Virtual Humans Forum
 Neural Based AI - NBAI (Next Generation AI)

T O P I C    R E V I E W
hologenicman Posted - May 15 2008 : 02:17:56
I would like to see a standardized interface paradigm for the next generation of neural based AI:

Sensory Input:

Monitor feed
Keyboard feed
Mouse feed
Game controller/joystick feed
System resource monitor
Bioptic camera feed
Stereo (or greater) microphone feed
Allowances for laboratory sensors such as electronic smell and taste

Motor Output:

Keyboard emulation
Mouse emulation
Game controller/joystick emulation
System commands
Servo control of cameras (such as the "Orbit" cameras)
Stereo speaker output and volume control
Allowances for additional motor controls (such as robotics/home automation)

This degree of standardization in the AI environment would allow various developers to create differing "black box" engines to act at the core of these standardized environments.

If nothing else, the creation of such a multi-faceted standardized environment would set developers on the path toward AI that interacts richly through the sensory and motor channels a next-generation AI requires.
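To make the proposal concrete, here is a minimal sketch (in Python; all class and method names are hypothetical, not an existing standard) of what such an environment could look like: sensory feeds and motor outputs registered by name, with any "black box" engine plugged into the middle.

```python
from abc import ABC, abstractmethod

class SensoryChannel(ABC):
    """One standardized sensory feed (camera, microphone, keyboard, ...)."""
    @abstractmethod
    def read(self) -> bytes:
        """Return the latest raw frame of sensory data."""

class MotorChannel(ABC):
    """One standardized motor output (keyboard emulation, servo, ...)."""
    @abstractmethod
    def write(self, command: bytes) -> None:
        """Send one motor command to the device."""

class AIEnvironment:
    """Registry wiring named channels to a pluggable 'black box' engine."""
    def __init__(self):
        self.sensors: dict[str, SensoryChannel] = {}
        self.motors: dict[str, MotorChannel] = {}

    def register_sensor(self, name: str, channel: SensoryChannel) -> None:
        self.sensors[name] = channel

    def register_motor(self, name: str, channel: MotorChannel) -> None:
        self.motors[name] = channel

    def step(self, engine) -> None:
        """One perception-action cycle: gather every input, let the engine
        decide, then dispatch its outputs to the named motor channels.
        `engine` is any callable mapping an input dict to an output dict."""
        inputs = {name: ch.read() for name, ch in self.sensors.items()}
        outputs = engine(inputs)
        for name, command in outputs.items():
            self.motors[name].write(command)
```

The point of the sketch is the separation: any engine that consumes and produces the named dicts can be dropped in without touching the channel code.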

John L>
IA|AI
15   L A T E S T    R E P L I E S    (Newest First)
hologenicman Posted - Jun 20 2013 : 21:07:32
Thank you.

Yes, the broader the interface, the better it can be applied to any AI application and the easier it is to share programming resources across varied projects.

John L>
HarrayAI Posted - Jan 09 2013 : 20:29:21
I'm all for a standardised interface, and I think your suggested list is pretty comprehensive, but I think you should aim broader too: this can be the interface for ANY AI technology, not just neural nets. : )
hologenicman Posted - Sep 04 2012 : 19:03:57
Ah, found a public copy on my personal website.

I'll get the date into it before too many days go by...

John L>
hologenicman Posted - Sep 04 2012 : 19:00:09
Hi Art,

It's good to hear from you.

I'll put the copyright date into the document as you have suggested.

However, I don't believe that I still have any copies that are publicly visible anymore...

I'll go back and thread in the date on all of the copies that are sitting on my server so that if there are any public links to them, they will then have the date in the copyright.

Later,
John L>
art Posted - Sep 04 2012 : 17:26:20
Hi John,

Been some time... I have been following through these parts as time allows.
You know, Mendicott was correct regarding the following. A copyright notice should contain:

The copyright symbol © (the letter C in a circle), or the word "Copyright," or the abbreviation "Copr."

The year of first publication of the work. In the case of compilations or derivative works incorporating previously published material, the year of first publication of the compilation or derivative work is sufficient. The year may be omitted where a pictorial, graphic, or sculptural work, with accompanying textual matter, if any, is reproduced in or on greeting cards, postcards, stationery, jewelry, dolls, toys, or any useful article.

The name of the owner of copyright in the work, or an abbreviation by which the name can be recognized, or a generally known alternative designation of the owner.

Example: Copyright 2002 John Doe

The © ("C in a circle") notice or symbol is used only on visually perceptible copies.

It is a good work, John, and you really should include a date. Just MHO. Be well! - Art -
hologenicman Posted - Sep 03 2012 : 19:58:43
Hmmm,

So that would be equivalent to when something gets our attention as we are looking, and our brain then instructs the eye to focus better on that structure...

I intend to have the organs/robotics all give full detail to the hologenic brain and have the brain choose at what resolution to process that information. An example is a room full of sound: the brain picks out the relevant voices.

However, following your post (and the functioning of the human body), output from the brain should be used to direct the robotics/organ to physically adjust for better tuning of the sensory input. An example is focusing the eye, or moving the eye to aim it at the subject of interest...

In an animal example, the brain could give output to the motor control of the ears to turn or adjust the ears to point in the direction of the incoming sound of interest...

As an aside, there is a lot of automatic adjusting done without the input of the brain. This adjusting and movement is mostly for the purposes of "tracking". The eye is continually self-adjusting its position to stay centered on the subject matter without having to trouble the brain with such details. However, having the output from the brain take overriding control of such adjustments when needed for the practicality of focus and better control of the input is a very natural behavior that truly needs to be emulated.

John L>
HologenicKid Posted - Sep 03 2012 : 19:00:13
Maybe the AI should have access to the higher-resolution frequencies, but (like in humans) operate at a lower level of acuteness for standby. This could be used for focusing on a person when they speak, or trying to find someone in a room of voices. It could be used for every function we use: muscles, eyes, brain power, and anything you have.

And to program this would be relatively simple; the output would include a signal that tells the "organ" that the AI needs more data:

eye-----------(+5/9 rez m)

The dashes are the normal output, and in the () is the sub-command. +: raise, 5/9: the importance, rez: the resolution, m: the middle or detail part of the eye. This is only for the eye, the middle and the edge.

Would this be less to program than always having all the rez, or would the preprogramming cancel out the gain?
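A rough sketch of how such a sub-command might be pulled out of an organ's output stream — the format follows the eye example above, but the function name and regular expression are purely illustrative assumptions:

```python
import re

def parse_subcommand(msg: str):
    """Parse a hypothetical resolution request like 'eye-----------(+5/9 rez m)'.

    Returns (organ, direction, priority, region): direction '+' raises or
    '-' lowers resolution, priority is the importance fraction (e.g. '5/9'),
    and region is 'm' (middle) or 'e' (edge).  The format is illustrative
    only, matching the eye example in the post above.
    """
    m = re.match(r"(\w+)-*\s*\(([+-])(\d+/\d+)\s+rez\s+(\w)\)", msg)
    if not m:
        raise ValueError(f"not a sub-command: {msg!r}")
    return m.groups()
```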
hologenicman Posted - Sep 03 2012 : 05:53:36
It basically boils down to the fact that everything is a compromise in some way or another. Nature tends to find the best compromise for survivability, and those without the best balance tend to die off. That's kinda how the genetic programming that Mikmoth was mentioning works.

The more individual frequencies you report input from, the better and finer the resolution is; however, this requires more processing power and more resources. In a reduced-resource situation or survival mode, this high use of resources can be detrimental to the best operation of the unit...

Sometimes, it is best to have a more gross (less finely tuned) reporting of neural data for a quicker and more basic understanding of the environment.

There are examples of having both fine and gross neural input selectively available in the human body as well. The human eye has super-high color detail in the center of the retina for daytime use; however, when night adaptation takes over for nighttime survivability, the higher concentrations of black-and-white low-light receptors are in the periphery of the retina, where one can detect attacking animals coming from the edges of our vision...

The decisions of how to design these resolutions may come from trial and error, but we can start out by studying the human body and its function as a pretty good starting point...

John L>
HologenicKid Posted - Sep 02 2012 : 07:05:30
When programming, is there very much difference in using every frequency versus groups of 5 or 10? And will that affect the understanding of what it heard and reproduces? Or would it be better with every one of them, or every two?
hologenicman Posted - Aug 30 2012 : 22:39:35
quote:
Left ear = L
Right ear = R
Time between each wave = .

Directly in front

R...R...R...R...R...R...R...R...R...R...R... etc.
L...L...L...L...L...L...L...L...L...L...L... etc.

moves right
> >
R...R...R...R...R...R..R..R..R..R..R.R.R.R.R.R.R etc.
L...L...L...L...L...L....L....L....L.....L.....L.....L.....L etc.

moves left

R...R...R...R...R...R....R....R....R.....R.....R.....R.....R etc.
L...L...L...L...L...L..L..L..L..L..L.L.L.L.L.L.L etc.


This could then be preprocessed by sending a message to the brain when the information changes. To signal the start it would be:

F 1,234 R(--) L(--)



Interesting...

There is actually something like this in the biological hardwiring of the retina in the eye. Groups of neurons act together with thresholds to accumulate signal before they trigger. One such group will only trigger if an object is moving from left to right at a minimum speed. Another group will act together only when the object moves from right to left. In this manner, the eye "pre-processes" the information and is able to send pertinent information to the brain with fewer neurons.

Remember that with the available hardware and technology, there are a lot of cheats that can be used to present neural data to the hologenic brain.

An audio signal can be broken down with a Fourier transform into a graphical representation that can be fed to the hologenic brain as graphical neural data.

A video feed can be recreated multiple times with various details and information extracted, and then each of those separate feeds can be simultaneously fed into the hologenic matrix (red, blue, green, greyscale, line detail, motion trails, brightness, etc.).
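As a sketch of the audio-to-graphical idea: chop the signal into frames, take the FFT magnitude of each, and scale the result to a greyscale image (frame size, scaling, and function name are my own choices, not a spec):

```python
import numpy as np

def audio_to_plane(signal: np.ndarray, frame: int = 256) -> np.ndarray:
    """Turn a mono audio signal into a 2-D magnitude 'image'
    (time frames x frequency bins) via the FFT, scaled to 0-255 so it
    can be fed to the matrix like any other graphical plane."""
    n = len(signal) // frame * frame           # drop any partial final frame
    frames = signal[:n].reshape(-1, frame)
    spectra = np.abs(np.fft.rfft(frames, axis=1))  # one spectrum per frame
    peak = max(spectra.max(), 1e-12)               # avoid dividing by zero
    return (spectra / peak * 255).astype(np.uint8)
```

Each row of the result is one moment in time, each column one frequency band: a spectrogram treated as ordinary picture data.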

John L>
hologenicman Posted - Aug 30 2012 : 22:27:48
quote:
The idea is tempting, but how would one PC know for sure what the other means? It seems like it would need to be a representation of language: set inputs that mean certain things. But the problem is that even with a language, humans can say one thing and be interpreted as something completely different.



The external environment is interpreted and represented entirely differently within each and every brain (hologenic or otherwise). There is a personalized storage of data.

The video feed for connectivity is intended for spreading a single hologenic brain over several different PCs (hardware) while maintaining a single functioning hologenic brain that is distributed physically over those PCs. The separate PCs don't need to understand any language between each other, since the video feed is just the means of communicating the needed shared information, which is personal to the hologenic brain as a whole.

John L>
HologenicKid Posted - Apr 19 2012 : 20:12:46
The idea is tempting, but how would one PC know for sure what the other means? It seems like it would need to be a representation of language: set inputs that mean certain things. But the problem is that even with a language, humans can say one thing and be interpreted as something completely different.

And another thing that popped into my head while I got caught up is: How many little hair cells that sense vibrations are there? One for every frequency in the human hearing range? About 19,980. And each could be measured in every way needed with a single byte, on and off. And the example where the sound source moves across the room quickly will be dealt with by the addition of another ear separated by the head. Ex.:


Left ear = L
Right ear = R
Time between each wave = .

Directly in front

R...R...R...R...R...R...R...R...R...R...R... etc.
L...L...L...L...L...L...L...L...L...L...L... etc.

moves right
> >
R...R...R...R...R...R..R..R..R..R..R.R.R.R.R.R.R etc.
L...L...L...L...L...L....L....L....L.....L.....L.....L.....L etc.

moves left

R...R...R...R...R...R....R....R....R.....R.....R.....R.....R etc.
L...L...L...L...L...L..L..L..L..L..L.L.L.L.L.L.L etc.


This could then be preprocessed by sending a message to the brain when the information changes. To signal the start it would be:

F 1,234 R(--) L(--)

If the frequency 1,234 changes to speed up on the left side and slow down on the right, the message would be:

F 1,234 R\/(--) L/\(--)

The space with (--) is filled with the amount of change:

F 1,234 R\/(01) L/\(01)

This means the source of frequency 1,234 has moved to the left very slowly. And

F 1,234 R/\(01) L\/(01)

means that it has stopped or reversed.

Just a bit of random thinking at 6:00 AM
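The two-ear timing idea above amounts to measuring an interaural time difference: the sound reaches one ear a few samples before the other, and the offset tells you the direction. A standard way to estimate that offset from sampled left/right signals is cross-correlation; a small numpy sketch (function name my own, no claim this is how the hologenic brain would do it):

```python
import numpy as np

def interaural_delay(left: np.ndarray, right: np.ndarray) -> int:
    """Estimate by how many samples the right channel lags the left,
    via the peak of the full cross-correlation.  A positive result
    means the sound reached the left ear first (source on the left);
    zero means the source is directly in front."""
    corr = np.correlate(left, right, mode="full")
    return (len(right) - 1) - int(np.argmax(corr))
```

This collapses the whole R.../L... wave-timing picture into one signed number per frequency band, which is exactly the kind of compact preprocessed message the posts above describe sending to the brain.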
hologenicman Posted - Jan 30 2011 : 05:48:56

I already knew that my neural input to the HologenicBrain matrix was represented graphically for each of the interface planes, but it just dawned on me that I could have complete neural communication from one PC to another by merely using streaming video.

The video can be cut into blocks to represent the various communication planes, and then the graphic imprint for that frame of neural data could just be shown on that block of the video.

This will allow for both input and output of neural data for the HologenicBrain.

The compression codecs and such for video are already in place, so I don't have to re-invent the wheel.
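The block layout described above can be sketched with a numpy array standing in for one video frame (the block geometry and function names are assumptions; the actual codec/streaming layer is omitted):

```python
import numpy as np

def pack_planes(planes: list, cols: int) -> np.ndarray:
    """Tile equally-sized 2-D neural-data planes into one video frame.

    Each plane occupies one block of the frame; the frame can then be
    pushed through any existing video codec/stream and the receiving
    machine recovers a plane with unpack_plane()."""
    h, w = planes[0].shape
    rows = -(-len(planes) // cols)                  # ceiling division
    frame = np.zeros((rows * h, cols * w), dtype=planes[0].dtype)
    for i, p in enumerate(planes):
        r, c = divmod(i, cols)
        frame[r*h:(r+1)*h, c*w:(c+1)*w] = p
    return frame

def unpack_plane(frame: np.ndarray, index: int, shape: tuple, cols: int) -> np.ndarray:
    """Recover plane `index` from a frame built by pack_planes()."""
    h, w = shape
    r, c = divmod(index, cols)
    return frame[r*h:(r+1)*h, c*w:(c+1)*w]
```

One caveat worth noting: lossy codecs will perturb the pixel values, so either a lossless codec or some tolerance for noise in the neural data would be needed.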

John L>
IA|AI
hologenicman Posted - Jan 27 2011 : 18:35:38

Yeah, the FlowStone is pretty powerful for signal processing from the bit that I've played with it.

BTW, FlowStone is set up to work with most of the Phidgets line of boards already. Just drag, drop, and configure...

I will check out the Phidgets API though. Even if I don't go with the FlowStone I will be using Phidgets boards of some sort.

John L>
IA|AI
mikmoth Posted - Jan 27 2011 : 16:28:16
Looks like a powerful piece of software.

Haven't used it... in fact I know very little about it beyond what I could glean off the website, but you might also want to consider Phidgets.

http://www.phidgets.com/

Their API is free and they have a lot more hardware available to interface with. Plus, since you are controlling the hardware through a very light API, you know exactly what your software is doing.

I love Phidgets. My doll/robot uses it for her eyes.

http://www.youtube.com/watch?v=hm9AN1PNQFM

Good luck... let us know how Flowstone works out.

Virtual Humans Forum © V.R.Consulting