All of the deductive methods may be used for inductive argument or inference, provided one or more of the propositions is probable rather than certain. For example:

Type / Name / Method (place "probably" appropriately):

Induction Modus Ponens: p > probably q, p; therefore probably q
Induction Modus Tollens: p > q, probably -q; therefore probably -p
Induction Chain: p > q, q > r; therefore p > probably r
Induction Disjunctive 1: p v q, -p; therefore probably q
Induction Disjunctive 2: p v q, -q; therefore probably p
Induction Addition 1: p; therefore probably p v q
Induction Addition 2: q; therefore probably p v q
Induction Conjunctive 1: -(p & q), p; therefore probably -q
Induction Conjunctive 2: -(p & q), q; therefore probably -p
Induction Simplification 1: (p & q); therefore probably p
Induction Simplification 2: (p & q); therefore probably q
Induction Adjunction: p, q; therefore probably p & q
Induction Reductio 1: p > -p; therefore probably -p
Induction Reductio 2: p > (q & -q); therefore probably -p
Induction Complex constructive: p > q, r > s, p v r; therefore probably q v s
Induction Complex destructive: p > q, r > s, -q v -s; therefore probably -p v -r
Induction Simple constructive: p > q, r > q, p v r; therefore probably q
Induction Simple destructive: p > q, p > r, -q v -r; therefore probably -p
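One way to read the "probably" in these schemata numerically is as a probability bound. A minimal sketch in Python, where the strength of the conditional and P(p) are assumed inputs rather than anything a real system would know:

```python
# Sketch: attaching a number to inductive modus ponens.
# Premises: P(q|p) and P(p); conclusion: a lower bound on P(q).
# The two input probabilities are illustrative assumptions.

def inductive_modus_ponens(p_q_given_p, p_p):
    """Return a lower bound on P(q) from P(q|p) and P(p).

    By the law of total probability, P(q) >= P(q|p) * P(p).
    """
    return p_q_given_p * p_p

bound = inductive_modus_ponens(0.9, 0.8)
print(round(bound, 2))  # 0.72
```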

One of my "long-term" projects ("long-term" implying "I'll probably never even start") is to make an AI that can estimate reasonable percentage chances for all the "probably" terms in induction, and plan around the resultant possibilities. Seems to me to be a core root of intelligence.

quote:One of my "long-term" projects ("long-term" implying "I'll probably never even start") is to make an AI that can estimate reasonable percentage chances for all the "probably" terms in induction, and plan around the resultant possibilities. Seems to me to be a core root of intelligence.

For belief calculation, check out Bayesian logic. For likelihood calculation, check out statistical analogy.
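A minimal example of the Bayesian belief calculation mentioned here, for a binary hypothesis (the rain/wet-street numbers are purely illustrative):

```python
# Sketch: Bayesian belief update for one hypothesis H given evidence E.
# All probabilities are invented for the example.

def bayes_update(prior, likelihood, likelihood_given_not):
    """Return P(H|E) via Bayes' rule for a binary hypothesis."""
    evidence = likelihood * prior + likelihood_given_not * (1 - prior)
    return likelihood * prior / evidence

# Belief that it rained, after seeing a wet street:
posterior = bayes_update(prior=0.3, likelihood=0.9, likelihood_given_not=0.1)
print(round(posterior, 3))  # 0.794
```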

And keep checking my work. I need someone knowledgeable to challenge my reasoning.

Almost all of the deductive methods may be reversed and used for abductive argument or inference, provided the answer is possible (i.e., plausible) rather than certain. For example:

Type / Name / Method:

Abduction Modus Ponens: p > q, q; therefore possibly p
Abduction Modus Tollens: p > q, -p; therefore possibly -q
Abduction Chain: p > q, q > r; therefore r > possibly p
Abduction Disjunctive 1: p v q, q; therefore possibly -p
Abduction Disjunctive 2: p v q, p; therefore possibly -q
Abduction Addition 1: p v q; therefore possibly p
Abduction Addition 2: p v q; therefore possibly q
Abduction Conjunctive 1: -(p & q), -q; therefore possibly p
Abduction Conjunctive 2: -(p & q), -p; therefore possibly q
Abduction Simplification 1: p; therefore possibly (p & q)
Abduction Simplification 2: q; therefore possibly (p & q)
Abduction Adjunction: no abductive equivalent
Abduction Reductio 1: unknown argument
Abduction Reductio 2: unknown argument
Abduction Complex constructive: p > q, r > s, q v s; therefore possibly p v r
Abduction Complex destructive: p > q, r > s, -p v -r; therefore possibly -q v -s
Abduction Simple constructive: p > q, r > q, q; therefore possibly p v r
Abduction Simple destructive: p > q, p > r, -p; therefore possibly -q v -r
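Since abduction is inference to a plausible explanation, one natural way to score the "possibly" terms is to rank candidate hypotheses by Bayesian posterior. A sketch, with invented priors and likelihoods:

```python
# Sketch: scoring abductive hypotheses. Given rules "h > e" with an
# assumed P(e|h), and an observed e, rank each hypothesis h by
# posterior plausibility. All numbers are illustrative assumptions.

def rank_hypotheses(priors, likelihoods):
    """Return hypotheses sorted by P(h|e), proportional to P(e|h) * P(h)."""
    scores = {h: likelihoods[h] * priors[h] for h in priors}
    total = sum(scores.values())
    return sorted(((h, s / total) for h, s in scores.items()),
                  key=lambda hs: hs[1], reverse=True)

ranked = rank_hypotheses(
    priors={"flu": 0.1, "cold": 0.3},
    likelihoods={"flu": 0.9, "cold": 0.4},  # P(fever | hypothesis)
)
print(ranked[0][0])  # cold -- the more plausible explanation
```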

These have not been tested for abductive validity yet, so use with caution.

Updated the mind map site with the following brief definition:

Problem Solving is a higher-order cognitive process that occurs when one does not know how to proceed from a given state to a desired goal state. The traditional, rational approach typically involves clarifying the description of the problem, analyzing causes, identifying alternatives, assessing each alternative, choosing one, implementing it, and evaluating whether the problem was solved.
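The cycle in that definition can be sketched as a plain control loop; every callable here is a placeholder for a domain-specific step, and the toy usage is purely illustrative:

```python
# Sketch of the rational problem-solving cycle: assess each alternative,
# choose the best, implement it, and evaluate until the goal is reached.

def solve(state, goal, alternatives, assess, apply_step):
    """Iterate choose/implement/evaluate until state equals goal."""
    while state != goal:
        best = max(alternatives(state), key=lambda a: assess(state, a))
        state = apply_step(state, best)
    return state

# Toy usage: reach 10 from 0, preferring the larger safe increment.
result = solve(
    state=0, goal=10,
    alternatives=lambda s: [1, 2],
    assess=lambda s, a: a if s + a <= 10 else -a,
    apply_step=lambda s, a: s + a,
)
print(result)  # 10
```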

I just found your work; it is quite impressive. I have experimented with similar bots in the past. There is a variation of ALICE that works with OpenCyc. I was successful in running some tests with it, and it could do things like "What is the capital of Greece?" or "My master is human", but nothing more than that. But this was not the biggest problem, as there were several other cons that finally made me abandon the project:

1) Passing knowledge to Cyc was an extremely difficult task and a full-time job.

2) Cyc already had too much knowledge that was useless for my domain. I tried to extract a small portion of the knowledgebase, but it was impossible.

3) The vast knowledge of Cyc makes it extremely demanding in computer resources. Although it is quite fast at inferring things, a simple inference can take up to 200MB to run.

Based on my experience, I have a number of things to suggest:

a) If the basic rules of your system are ready, i.e., your bot is ready to learn, release it to a specific group of users. Allow them to teach it some basics in their domain of interest, and then to test it under realistic conditions. In a real application, your bot should be able to achieve 80% accuracy in understanding the user's input. Are you sure that your main architecture can handle that?

b) Individual knowledge is better than collective knowledge. This is where the main disadvantage of OpenCyc lies; don't repeat the same mistake. If your users want to merge their knowledgebases later on, simply allow them to do that.

c) Don't release it on the web. You will simply end up with a bot learning useless things like "I don't want to marry you" and others.

d) As an interface designer, I always think first in terms of real applications and then as an AI guy (which is mostly my hobby). I am not sure that all the functions you are trying to implement can be used by a real application. Why don't you try something simpler for a start and then implement the rest? It could be something like:

input -> parsing -> matching with existing knowledge/questions (if a bot can achieve 80% accuracy in this part of the equation, then we have something worth using in real-world applications) -> trigger appropriate scripted answers -> output.

Of course, to keep context, the bot should also be able to save the user's previous input to some kind of working memory.
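The whole pipeline, including working memory, might look like this in miniature. The patterns and answers are invented placeholders, not any real bot's knowledge base:

```python
import re
from collections import deque

# Sketch of the pipeline above: input -> parse -> match against known
# patterns -> trigger a scripted answer -> output, with a small working
# memory for context. SCRIPTS is an illustrative stand-in for a KB.

SCRIPTS = {
    r"what is the capital of (\w+)\??": "I would look up the capital of {0}.",
    r"my master is (\w+)": "Noted: your master is {0}.",
}

working_memory = deque(maxlen=5)  # keeps the last few user inputs

def respond(user_input):
    working_memory.append(user_input)          # save context
    text = user_input.strip().lower()          # crude "parsing"
    for pattern, answer in SCRIPTS.items():    # match against knowledge
        m = re.fullmatch(pattern, text)
        if m:
            return answer.format(*m.groups())  # trigger scripted answer
    return "I don't know yet -- teach me."

print(respond("What is the capital of Greece?"))
# I would look up the capital of greece.
```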

giannis, Thanks for your comments. I owe you a free copy of Harry.

quote:Based on my experience, I have a number of things to suggest:

a) If the basic rules of your system are ready, i.e., your bot is ready to learn, release it to a specific group of users. Allow them to teach it some basics in their domain of interest, and then to test it under realistic conditions. In a real application, your bot should be able to achieve 80% accuracy in understanding the user's input. Are you sure that your main architecture can handle that?

quote:d) As an interface designer, I always think first in terms of real applications and then as an AI guy (which is mostly my hobby). I am not sure that all the functions you are trying to implement can be used by a real application. Why don't you try something simpler for a start and then implement the rest? It could be something like:

input -> parsing -> matching with existing knowledge/questions (if a bot can achieve 80% accuracy in this part of the equation, then we have something worth using in real-world applications) -> trigger appropriate scripted answers -> output.

Which real world application would you find useful?

My PhD area is virtual humans in mobile guide systems. Your approach sounds very interesting for the Q&A part of my work, where the user asks questions to learn more about a monument after the character has presented it in detail. Actually, with very little information in its knowledgebase, Harry can answer many interesting questions without any problem, simply by using its logic. I have seen that this is possible with the OpenCyc version of ALICE, and I am 100% sure that it is possible with your Harry as well. Just one question... I am sure that Harry's brain can handle quite complex questions, but how do you plan to catch the user's input so that it can be analysed by Harry's logic brain? In ALICE we used patterns; with the same pattern the system could answer several questions. How do you plan to do this in your Harry?

If you are true to your word, please give me Harry with a completely blank KB and let me teach it. I will then report the results of the evaluation back to you. By the way, do you have a timeframe for the release?

quote: Just one question... I am sure that Harry's brain can handle quite complex questions. One question... as I haven't seen your code: in ALICE we used patterns to catch the user's input before passing it to the OpenCyc KB. With the same pattern the system could answer several questions.

Can you give me an example?

quote: How do you plan to do this in your Harry?

Harry is designed to parse paragraphs and sentences into simple, compound, or complex propositions using a pattern recognition database for the current language. The patterns in the language database include wildcards which allow Harry to recognize many statements.

Another database contains the rules that Harry uses for inference.

A third database is used for knowledge storage and is separated into long term and short term memory.
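ALICE-style wildcard patterns of the kind mentioned above can be compiled to regular expressions so that one pattern recognizes many statements. This is a sketch of the general technique, not Harry's actual database format:

```python
import re

# Sketch: compile a wildcard pattern like "my name is *" into a
# case-insensitive regex, so a single pattern matches many inputs.
# The "*" wildcard syntax is an assumption modeled on ALICE.

def compile_pattern(pattern):
    """Turn a wildcard pattern into a case-insensitive regex."""
    regex = re.escape(pattern).replace(r"\*", r"(.+)")
    return re.compile(regex, re.IGNORECASE)

rule = compile_pattern("my name is *")
m = rule.fullmatch("My name is Giannis")
print(m.group(1))  # Giannis
```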

The pattern matches the user's input, and the tags are linked with ResearchCyc (the research version of OpenCyc) for translation to the internal language of Cyc and answer generation. One pattern can answer several questions. A quite effective solution for knowledge-based bots, but, as I wrote above, one with several problems.

Please tell me, do you have a timeframe for the release of Harry?

quote:Please tell me, do you have a timeframe for the release of Harry?

I may have an alpha release available in a couple of weeks. "Alpha" means that only some of Harry's functions are working, and the user must be prepared to deal with a partially working application.