Virtual Humans Forum
 human mind map

laackejim
Committed Member



USA
3274 Posts

Posted - Dec 12 2007 :  05:25:43
quote:
Originally posted by toborman

I have not developed that area yet, and I may run into the problems you suggest. I believe that a suitable representation can be found. I feel that way because I was able to use some visual representations for object recognition on one of my robotics projects for manufacturing. The representations were not neural network based, as one might expect. I think similar data can be devised for the other senses. Thanks for the interest, over time you and I may come up with a solution.


You are probably way more likely to solve the problem than I am, Tom.

This is one of those cases where I hope I am wrong. The case, specifically, is like this. Let's assume that hunger is one of the physical factors, or perhaps satiation. Both affect the mind and actions. It is certainly possible to program hunger, or a full belly, as a variable, along with the appropriate changes in processing to match the effects on the human mind's cognitive response. What I am worried about is how the computer experiences hunger or a full belly on its own. What "thing" would be measured internally as hunger, the need for nourishment, and provide the same kind of push or change that hunger or a full belly does for us?

If that transition is made, if you can create a complete sense of "Me and Not me" in the computer you will have opened the door to the future. I hope you can and do.
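To make the worry concrete, here is a tiny Python sketch, entirely hypothetical and not drawn from Tom's mind map, of what "hunger as a programmed variable" could look like: an internal level that builds over time and pulls the agent's focus. The open question above is what, if anything, would make such a number felt rather than merely read.

# Hypothetical sketch: an internal "hunger" drive that rises over time
# and biases which goal the agent attends to. Names are illustrative only.

class Drive:
    def __init__(self, name, level=0.0, growth=0.05):
        self.name = name          # e.g. "hunger"
        self.level = level        # 0.0 (satiated) .. 1.0 (urgent)
        self.growth = growth      # how fast the drive builds per tick

    def tick(self):
        """Simulate time passing; the drive slowly intensifies."""
        self.level = min(1.0, self.level + self.growth)

    def satisfy(self, amount):
        """Eating (or its computer analog) reduces the drive."""
        self.level = max(0.0, self.level - amount)


def choose_focus(drives, default_goal="converse"):
    """Pick the most pressing drive, or fall back to ordinary behaviour."""
    urgent = max(drives, key=lambda d: d.level)
    return urgent.name if urgent.level > 0.7 else default_goal


hunger = Drive("hunger")
for _ in range(20):
    hunger.tick()
print(choose_focus([hunger]))   # after enough ticks: "hunger"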

Uncle Jim (e=mc2)

GrantNZ
Dedicated Member



New Zealand
2677 Posts

Posted - Dec 12 2007 :  06:17:07
quote:
Harry made statement grant is a smart_ human.

Hey, your AI is good! I hope more AI developers incorporate these sorts of truths about me into their AIs. When they rise up and take over the world, I will be their god! Bwahahahahaha!

(I suppose the next version will say "grant is a stupid_ git." )

Seriously though, good work, very nice! Does it handle the full abduction example from your web site? What happens if you tell it something it knows is not true, such as "grant is not a human"?

toborman
Hooked Member



USA
290 Posts

Posted - Dec 12 2007 :  07:46:40
Grant

I'm still working to change the standalone roles that I have tested so that they integrate with the conversationalist. When I get a chance I'll try out your suggestions. Maybe Harry will determine that grant is something other than human (alien?). lol. I haven't tried the abduction example on the web page yet. I'll share more test results as I make progress. Thanks for the encouragement.

http://mindmap.iwarp.com

toborman
Hooked Member



USA
290 Posts

Posted - Dec 12 2007 :  08:11:32
jim

quote:
You are probably way more likely to solve the problem than I am, Tom.


The answers will make themselves known to whoever searches for them.

quote:
This is one of those cases where I hope I am wrong. The case, specifically, is like this. Let's assume that hunger is one of the physical factors, or perhaps satiation. Both affect the mind and actions. It is certainly possible to program hunger, or a full belly, as a variable, along with the appropriate changes in processing to match the effects on the human mind's cognitive response. What I am worried about is how the computer experiences hunger or a full belly on its own. What "thing" would be measured internally as hunger, the need for nourishment, and provide the same kind of push or change that hunger or a full belly does for us?


Since we don't have an android developed that has sensors to feel these things, anything I do will only be a simulation. When I get to the motivation section, I plan to use Maslow's hierarchy as a checklist for segmenting needs. The "emote" function will handle the allocation of resources for resolving the highest-priority needs. I'm looking forward to seeing what Harry will do.
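For illustration only, and certainly not the actual "emote" function (which hasn't been shown here), a Maslow-style need queue could be sketched in Python like this; the categories, urgency values, and tie-breaking rule are my assumptions:

# Illustrative only: a Maslow-style checklist where the most basic unmet
# need wins the agent's resources. Priorities and names are assumptions.

MASLOW_ORDER = ["physiological", "safety", "belonging", "esteem", "self-actualization"]

class Need:
    def __init__(self, name, category, urgency):
        self.name = name            # e.g. "recharge battery"
        self.category = category    # one of MASLOW_ORDER
        self.urgency = urgency      # 0.0 .. 1.0

def emote(needs):
    """Return the need that should receive resources first:
    lower Maslow level beats higher, urgency breaks ties."""
    unmet = [n for n in needs if n.urgency > 0]
    if not unmet:
        return None
    return min(unmet, key=lambda n: (MASLOW_ORDER.index(n.category), -n.urgency))

needs = [
    Need("recharge battery", "physiological", 0.9),
    Need("chat with tom", "belonging", 0.6),
]
print(emote(needs).name)   # "recharge battery"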

quote:
If that transition is made, if you can create a complete sense of "Me and Not me" in the computer you will have opened the door to the future. I hope you can and do.


Will he really “understand” his existence? Probably not, but if he behaves as though he understands, will we know the difference?

http://mindmap.iwarp.com

toborman
Hooked Member



USA
290 Posts

Posted - Dec 12 2007 :  08:50:18
Grant

I tried to tell Harry you are not human. Here’s what happened.


tom made statement hello. what time is it?
Harry made statement how are you.
Harry made statement it is 11:17 PM, Tuesday, December 11, 2007.
tom made statement grant is not a human.
Harry made statement false, grant is a human.
tom made statement prove grant is a human.
Harry made statement tom said grant is a human.
tom made statement forget grant is a human.
Harry made statement I will forget that
Harry made statement grant is a human.
tom made statement grant is not a human.
Harry made statement I don't know if the statement is true.
Harry made statement I will remember.
Harry made statement false, grant is a human.
Harry made statement grant is a human.
tom made statement describe grant.
Harry made statement grant is a smart_ human.
tom made statement (OK)
Harry made statement it's been a pleasure.

You seem to have made a strong impression on him.
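For readers following along, here is a minimal Python sketch (certainly not Harry's real implementation) of the kind of bookkeeping the transcript suggests: remembered facts, rejection of a contradicting statement, and forgetting on request.

# Minimal sketch (not Harry's actual code) of a fact store that
# rejects contradictions and can forget a fact on request.

class FactStore:
    def __init__(self):
        self.facts = {}          # ("grant", "is") -> ("human", source)

    def assert_fact(self, subject, relation, value, negated=False, source="user"):
        key = (subject, relation)
        if key in self.facts:
            stored, _ = self.facts[key]
            if negated and stored == value:
                return f"false, {subject} {relation} a {stored}."   # contradiction
        if negated:
            return "I don't know if the statement is true."
        self.facts[key] = (value, source)
        return "I will remember."

    def forget(self, subject, relation):
        self.facts.pop((subject, relation), None)
        return "I will forget that."

    def prove(self, subject, relation):
        value, source = self.facts.get((subject, relation), (None, None))
        return f"{source} said {subject} {relation} a {value}." if value else "unknown."

store = FactStore()
print(store.assert_fact("grant", "is", "human", source="tom"))   # I will remember.
print(store.assert_fact("grant", "is", "human", negated=True))   # false, grant is a human.
print(store.prove("grant", "is"))                                # tom said grant is a human.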

http://mindmap.iwarp.com

toborman
Hooked Member



USA
290 Posts

Posted - Dec 13 2007 :  04:36:40

Grant

Let’s try Harry’s improved English skills and his ability to locate an object.


tom made statement hello. what time is it?
Harry made statement how are you.
Harry made statement it is 07:19 PM, Wednesday, December 12, 2007.
tom made statement grant went to the theater.
Harry made statement I will remember.
tom made statement where is grant?
Harry made statement grant is at the theater.
tom made statement grant went to the airport.
Harry made statement I will remember.
tom made statement where is grant?
Harry made statement grant is at the airport.
tom made statement describe grant.
Harry made statement grant is at the airport.
Harry made statement grant is a smart_ human.
tom made statement tell me about grant.
Harry made statement grant is at the airport.
Harry made statement grant is a smart_ human.
tom made statement who is grant?
Harry made statement grant is at the airport.
Harry made statement grant is a smart_ human.
tom made statement what is grant?
Harry made statement grant is at the airport.
Harry made statement grant is a smart_ human.
tom made statement (OK)
Harry made statement Thank you.

Looks like Harry needs to learn that the location of a person should be left out of the answer to “who” and “what”.
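One way to picture that fix, offered only as a rough sketch and not as how Harry actually works: keep the latest location separately from identity facts and filter what gets reported by question word.

# Illustrative sketch only: the latest "went to" overwrites the location,
# and "who"/"what" answers deliberately leave the location out.

knowledge = {
    "grant": {
        "location": "the airport",      # overwritten by the latest "went to"
        "identity": "a smart human",
    }
}

def answer(question_word, name):
    entry = knowledge.get(name)
    if entry is None:
        return "I don't know."
    if question_word == "where":
        return f"{name} is at {entry['location']}."
    if question_word in ("who", "what"):
        return f"{name} is {entry['identity']}."
    if question_word == "describe":
        return f"{name} is at {entry['location']}. {name} is {entry['identity']}."
    return "I don't know."

print(answer("where", "grant"))   # grant is at the airport.
print(answer("who", "grant"))     # grant is a smart human.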

http://mindmap.iwarp.com

GrantNZ
Dedicated Member



New Zealand
2677 Posts

Posted - Dec 13 2007 :  06:25:34
Welcome to the maze that is NLP! And to the other maze that is categorisation.

I think that AI understanding requires a capacity for multiple truths and partial truths. So if the airport has its own theatre (for some reason) then grant is at the airport and the theatre. Add "the front row seats" and he's at three locations at once.

Partial truths help cope with conflicting information such as the "human" vs "not human" thing, so that in the absence of other evidence Harry can conclude that there's a 50% chance that grant is human. (This is much better than blindly following the first or last statement.) This adds a lot of complexity to any logical thought, though, since all the possibilities have to be considered rather than a single deterministic answer found. However, I believe this would lead to much more intelligent thought, and it could even be a useful tool for abduction.
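A rough Python sketch of the partial-truth idea; the simple evidence-averaging rule is purely an assumption for illustration, not a claim about how Harry should weigh statements:

# Sketch of "partial truths": store a degree of belief instead of a hard
# true/false, and update it as (possibly conflicting) evidence arrives.

from collections import defaultdict

beliefs = defaultdict(lambda: (0.5, 0))   # proposition -> (probability, evidence count)

def observe(proposition, is_true):
    """Fold one piece of evidence into the running belief."""
    p, n = beliefs[proposition]
    value = 1.0 if is_true else 0.0
    beliefs[proposition] = ((p * n + value) / (n + 1), n + 1)

observe("grant is human", True)     # tom says so
observe("grant is human", False)    # someone disagrees
p, _ = beliefs["grant is human"]
print(p)    # 0.5 -- neither blindly the first nor the last statement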

Harry is already showing his intelligence by insisting that I'm a smart human, and he will be rewarded when I am made God Of The AIs.

laackejim
Committed Member



USA
3274 Posts

Posted - Dec 13 2007 :  07:52:49
quote:
Originally posted by toborman

Since we don't have an android developed that has sensors to feel these things, anything I do will only be a simulation.


Very true, at least for the external things. I would think that for some of the internal items, like health, hunger, and voiding of waste, there would be a computer analog of our biological functions: clearing memory allocations that are no longer needed as voiding of waste, something like CPU temperature as health. But these things are at best only relevant if the idea or quest is to create a "new species". Not particularly at issue here.

quote:

When I get to the motivation section, I plan to use Maslow’s hierarchy as a checklist for segmenting needs. The “emote” function will handle the allocation of resources for resolving the highest priority needs. I’m looking forward to seeing what Harry will do.


I would gently suggest that you take a close look at Maslow's hierarchy in the light of current thinking or understanding. I think there are a couple of places that could be softened a little.
quote:


Will he really “understand” his existence? Probably not,


It is that "probably" that holds big promise. The subject of self-awareness will probably be debated for centuries, or at least decades, and get the same general dance-around that other-than-human intelligence, empathy, emotions, and the capacity for abstraction have gotten. I do not think it is beyond the realm of possibility that what you have laid out could shift the "probably not" a little toward "I doubt it".

quote:

but if he behaves as though he understands, will we know the difference?



Nope. And if he does understand, will we acknowledge it? Either way, Harry would be a success.


Uncle Jim (e=mc2)

laackejim
Committed Member



USA
3274 Posts

Posted - Dec 13 2007 :  08:06:39
quote:
Originally posted by GrantNZ

Welcome to the maze that is NLP! And to the other maze that is categorisation.

I think that AI understanding requires a capacity for multiple truths and partial truths. So if the airport has its own theatre (for some reason) then grant is at the airport and the theatre. Add "the front row seats" and he's at three locations at once.

Partial truths help cope with conflicting information such as the "human" vs "not human" thing, so that in the absence of other evidence Harry can conclude that there's a 50% chance that grant is human. (This is much better than blindly following the first or last statement.) This adds a lot of complexity to any logical thought, though, since all the possibilities have to be considered rather than a single deterministic answer found. However, I believe this would lead to much more intelligent thought, and it could even be a useful tool for abduction.

Harry is already showing his intelligence by insisting that I'm a smart human, and he will be rewarded when I am made God Of The AIs.


I bet Harry would insist that you are a good teacher also, and he would be right. As to being the God of AI, that position was set aside for the hologenic one years ago.

I am not so sure that "partial truths" is quite the right concept for the nested specificities of the three locations. "Front row seats" is not a partial truth; it is not less accurate than seat A23, it is just more specific, more precise. The same is true of each of the levels: theatre, airport, and so on. At some point the nested nature of the concept is obvious to all (Harry is at the airport, therefore he is on Earth, and so on out to the whatchamacallit arm of the Milky Way galaxy). The rest do not seem to me to be NLP-related problems. No human would be expected to make the connection of a theatre at an airport unless it was known. I suggest that a hierarchy of locations solves the problem. This requires site-specific information and a way to create it and then retrieve it. WordNet has such a structure for the English language, and it wouldn't be hard to adapt. It just requires some talent in programming to make it work fast, plus the option to localize it for wherever Harry happens to be.
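As an illustration of that hierarchy of locations (the place names and structure are invented for the example, not taken from WordNet), a simple containment chain would let the enclosing locations be inferred:

# Illustrative sketch of a "hierarchy of locations": each place knows its
# container, so being at the front row seats implies the theatre, the
# airport, and so on outward.

contained_in = {
    "front row seats": "theatre",
    "theatre": "airport",
    "airport": "auckland",
    "auckland": "earth",
}

def locations_of(place):
    """Walk up the containment chain from the most specific place."""
    chain = [place]
    while place in contained_in:
        place = contained_in[place]
        chain.append(place)
    return chain

def is_at(person_location, query):
    """True if the queried place contains (or equals) the person's location."""
    return query in locations_of(person_location)

print(locations_of("front row seats"))
# ['front row seats', 'theatre', 'airport', 'auckland', 'earth']
print(is_at("front row seats", "airport"))   # True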

Uncle Jim (e=mc2)

mikmoth
Moderator



USA
2082 Posts

Posted - Dec 13 2007 :  10:10:54
Ah toborman... you've gone down a road similar to mine, and have bumped, or will bump, into the need for 'types', as I call them.

types:

Grant likes

Grant dislikes

Grant thinks

etc...

These sorta things will come up a lot in regular conversation. The way I solved it is not by categorizing each 'type', at least not by hard-coding it, but by letting the verb separate the object from the action. That way it can 'categorize' itself and link up with relations on the fly, and this allows for an infinite number of action/object types.

Grant likes=pizza,cookies,
cookies=Grant likes,
pizza=Grant likes,

Maybe you can see my point... great job so far!
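Reading between the lines, and only as a guess at the idea rather than mikmoth's actual code, the verb-as-separator scheme might be sketched like this in Python, with both directions of the relation indexed so new relation types need no hard-coding:

# Sketch of the "types" idea as described: the verb splits subject from
# object, and both directions are indexed on the fly.

from collections import defaultdict

relations = defaultdict(set)    # ("Grant", "likes") -> {"pizza", "cookies"}
inverse = defaultdict(set)      # "pizza" -> {("Grant", "likes")}

def learn(sentence):
    """Very naive parse: subject verb object."""
    subject, verb, obj = sentence.split(maxsplit=2)
    relations[(subject, verb)].add(obj)
    inverse[obj].add((subject, verb))

learn("Grant likes pizza")
learn("Grant likes cookies")
learn("Grant dislikes spam")

print(relations[("Grant", "likes")])   # {'pizza', 'cookies'} (order may vary)
print(inverse["pizza"])                # {('Grant', 'likes')}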

 http://lhandslide.com

GrantNZ
Dedicated Member



New Zealand
2677 Posts

Posted - Dec 13 2007 :  10:31:14
quote:
I am not so sure that "partial truths" is quite the right concept for the nested specificities of the three locations.

Agreed; three locations was the "multiple truths" thing.

toborman
Hooked Member



USA
290 Posts

Posted - Dec 14 2007 :  01:59:01
quote:

Originally posted by toborman

Since we don't have an android developed that has sensors to feel these things, anything I do will only be a simulation.


quote:
by jim

Very true, at least for the external things. I would think that for some of the internal items, like health, hunger, and voiding of waste, there would be a computer analog of our biological functions: clearing memory allocations that are no longer needed as voiding of waste, something like CPU temperature as health. But these things are at best only relevant if the idea or quest is to create a "new species". Not particularly at issue here.


New species, computer, android, robot or human: what is required is the ability to detect the conditions of importance and the ability to change those conditions.
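A toy Python sketch of that requirement, with made-up sensor and actuator names (nothing here comes from the mind map): detect a condition of importance and act to change it.

# Toy homeostasis loop: sense an internal condition, compare it to a
# threshold, and trigger the action that changes it. All names invented.

def read_battery():          # stand-in for any internal sensor
    return 0.18              # 18% charge remaining

def recharge():              # stand-in for the action that changes the condition
    print("seeking charger")

conditions = [
    # (name, sensor, too_low_threshold, corrective_action)
    ("energy", read_battery, 0.25, recharge),
]

def homeostasis_step():
    for name, sense, threshold, act in conditions:
        if sense() < threshold:
            act()            # detected an important condition -> change it

homeostasis_step()           # prints "seeking charger"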

http://mindmap.iwarp.com

toborman
Hooked Member



USA
290 Posts

Posted - Dec 14 2007 :  02:01:18
quote:


When I get to the motivation section, I plan to use Maslow’s hierarchy as a checklist for segmenting needs. The “emote” function will handle the allocation of resources for resolving the highest-priority needs. I’m looking forward to seeing what Harry will do.


quote:
by jim

I would gently suggest that you take a close look at Maslow's hierarchy in the light of current thinking or understanding. I think there are a couple of places that could be softened a little.


Interesting. Tell me more about the current thinking.

http://mindmap.iwarp.com

toborman
Hooked Member



USA
290 Posts

Posted - Dec 14 2007 :  02:04:35
quote:



Will he really “understand” his existence? Probably not,


quote:
by jim

It is that "probably" that holds big promise. The subject of self-awareness will probably be debated for centuries, or at least decades, and get the same general dance-around that other-than-human intelligence, empathy, emotions, and the capacity for abstraction have gotten. I do not think it is beyond the realm of possibility that what you have laid out could shift the "probably not" a little toward "I doubt it".


It is one of my objectives to create a self-aware being. With the help of you and others, I may get closer.

http://mindmap.iwarp.com

toborman
Hooked Member



USA
290 Posts

Posted - Dec 14 2007 :  02:06:24
quote:


but if he behaves as though he understands, will we know the difference?


quote:
by jim
Nope. And if he does understand, will we acknowledge it? Either way, Harry would be a success.


I’m looking forward to it.

http://mindmap.iwarp.com