Virtual Humans Forum
Neural Based AI - NBAI (Next Generation AI)

hologenicman
Moderator



USA
3322 Posts

Posted - Aug 08 2008 :  20:38:22
Yes, very interesting article.

John L>
IA|AI

HologenicMan
John A. Latimer
http://www.UniversalHologenics.com

"If the Human brain were so simple that we could understand it,
we would be so simple that we couldn't..."
-Emerson M Pugh-

Current project:http://www.vrconsulting.it/vhf/topic.asp?TOPIC_ID=816&whichpage=1

DISCOVERY: The more I learn, the more I learn how little I know.
GOAL: There's strength in simplicity.
NOTE: Goal not always achieved.

hologenicman
Moderator



USA
3322 Posts

Posted - Aug 11 2008 :  00:10:20
Here is a link to my dated work on the HologenicBrain. A lot of advances, refinements, and simplifications have been made since, but the underlying basic premise still holds true in my current application under development.

Of course, I provide this link and the information it contains in the spirit of advancing the field of AI and Virtual Humans. No one is granted ownership of the concepts and ideas presented; this is stated so that I and/or others may continue to use these concepts and ideas to develop applications of our own creation:

http://clovercountry.com/downloads/The_Hologenic_Brain_16.doc

Remember that this document is dated and quite "clinical" in how the information is presented. Have fun with it, and post any feedback you may have in this thread so that everyone can share in the conversation.

John L>
IA|AI

HologenicMan
John A. Latimer
http://www.UniversalHologenics.com

"If the Human brain were so simple that we could understand it,
we would be so simple that we couldn't..."
-Emerson M Pugh-

Current project:http://www.vrconsulting.it/vhf/topic.asp?TOPIC_ID=816&whichpage=1

DISCOVERY: The more I learn, the more I learn how little I know.
GOAL: There's strength in simplicity.
NOTE: Goal not always achieved.

hologenicman
Moderator



USA
3322 Posts

Posted - May 13 2009 :  01:53:19
Here is a great demonstration of what I call temporally cascaded neural input:
(refresh the browser to change the video)

http://www.yooouuutuuube.com/v/?rows=4&cols=4&id=rand

http://www.yooouuutuuube.com/v/?rows=36&cols=36&id=rand&startZoom=1

I found this example, and it does almost exactly what I plan to do for my Hologenic Brain's video input, with the exception that my visual panels will be stacked like a deck of cards within the 3-D Hologenic Matrix.

http://www.yooouuutuuube.com/
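
For illustration, here is a minimal Free Pascal sketch of the deck-of-cards idea. This is not my actual CoPilot code; the names and sizes (TFrame, PushFrame, a 16x16 frame, 8 layers deep) are made up just to show the mechanism: each new frame goes on top of a 3-D array and the older frames slide one layer deeper, so depth becomes time.

code:
program FrameStackSketch;
{ Illustrative only: successive video frames stacked like a deck of cards
  in a 3-D array, so that depth = time and the whole volume forms a
  temporally cascaded input. }
const
  FrameW = 16;   // toy resolution
  FrameH = 16;
  Depth  = 8;    // how many past frames the "deck" keeps
type
  TFrame = array[0..FrameH - 1, 0..FrameW - 1] of Byte;
  TFrameStack = array[0..Depth - 1] of TFrame;

{ Shift older frames one slot deeper and place the newest frame on top. }
procedure PushFrame(var Stack: TFrameStack; const NewFrame: TFrame);
var
  d: Integer;
begin
  for d := Depth - 1 downto 1 do
    Stack[d] := Stack[d - 1];
  Stack[0] := NewFrame;
end;

var
  Stack: TFrameStack;
  Frame: TFrame;
  t, x, y: Integer;
begin
  FillChar(Stack, SizeOf(Stack), 0);
  for t := 1 to 20 do
  begin
    { Fake "camera" input: a brightness gradient that drifts over time. }
    for y := 0 to FrameH - 1 do
      for x := 0 to FrameW - 1 do
        Frame[y, x] := (x + y + t) mod 256;
    PushFrame(Stack, Frame);
  end;
  WriteLn('Newest-frame sample pixel: ', Stack[0][0, 0]);
  WriteLn('Oldest-frame sample pixel: ', Stack[Depth - 1][0, 0]);
end.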

Later,
John L>
IA|AI

HologenicMan
John A. Latimer
http://www.UniversalHologenics.com

"If the Human brain were so simple that we could understand it,
we would be so simple that we couldn't..."
-Emerson M Pugh-

Current project:http://www.vrconsulting.it/vhf/topic.asp?TOPIC_ID=816&whichpage=1

DISCOVERY: The more I learn, the more I learn how little I know.
GOAL: There's strength in simplicity.
NOTE: Goal not always achieved.

Edited by - hologenicman on May 13 2009 18:35:31

hologenicman
Moderator



USA
3322 Posts

Posted - Jul 18 2009 :  23:20:38
I've dusted off the HologenicBrain script and printed it out to get familiar with it again.

The HologenicBrain receives and sends all input and output as neural values and experiences its world as a Neural Based AI.

I appreciate the patience with my lack of participation over the last year or so. I've been without a computer capable of handling my application, so I set the project aside until I could get a PC with more oomph and room.

I am getting ready to put together a new computer with an i7 CPU and 12GB of RAM running Vista Ultimate 64-bit. This will not have the 2GB-per-application limitation that 32-bit XP Pro had.

Later,
John L>

HologenicMan
John A. Latimer
http://www.UniversalHologenics.com

"If the Human brain were so simple that we could understand it,
we would be so simple that we couldn't..."
-Emerson M Pugh-

Current project:http://www.vrconsulting.it/vhf/topic.asp?TOPIC_ID=816&whichpage=1

DISCOVERY: The more I learn, the more I learn how little I know.
GOAL: There's strength in simplicity.
NOTE: Goal not always achieved.

kenkirkland
Curious Member



58 Posts

Posted - Jul 19 2009 :  19:37:30
Go John go!!!!!!!!!

Ken

hologenicman
Moderator



USA
3322 Posts

Posted - Aug 01 2009 :  22:20:18
I am in the process of porting my Pascal source for the CoPilot application over to Lazarus on my new Vista 64-bit PC.

Thus far, I have removed the TrayIconLaz dependency, since it is a utility that seems to have been intended for 32-bit implementations. I found the source for it, but I still removed it for now, since I want to avoid anything that is not readily available in the Lazarus installation.

John L>
IA|AI

HologenicMan
John A. Latimer
http://www.UniversalHologenics.com

"If the Human brain were so simple that we could understand it,
we would be so simple that we couldn't..."
-Emerson M Pugh-

Current project:http://www.vrconsulting.it/vhf/topic.asp?TOPIC_ID=816&whichpage=1

DISCOVERY: The more I learn, the more I learn how little I know.
GOAL: There's strength in simplicity.
NOTE: Goal not always achieved.

hologenicman
Moderator



USA
3322 Posts

Posted - Aug 01 2009 :  22:24:44

I am getting a successful build of my source code, but the application then throws a SIGSEGV error and terminates, apparently from memory misuse.

My suspicion is that the "byte" variable types that I used for space savings back in the 32-bit implementation are not behaving well in the 64-bit environment.

I converted all the byte variables to integers, and now I don't get the SIGSEGV anymore.

Now the application causes the debugger to crash altogether.

I absolutely love the process of figuring this stuff out.
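
For illustration only (this is not my actual CoPilot source, and the variable names are made up), here is the kind of change I mean, together with FPC's run-time range checking, which can turn this sort of silent memory misuse into a reported error:

code:
program ByteToIntegerSketch;
{$mode objfpc}
{$R+} // range checking on, so an out-of-range Byte is reported at run time
{ Hypothetical illustration: a Byte counter saved space in the 32-bit build
  but silently wraps at 255, which can later corrupt whatever it indexes.
  Widening it to Integer removes the limit. }
uses
  SysUtils;
var
  SmallCounter: Byte;     // old, space-saving declaration
  WideCounter: Integer;   // wider replacement used after the port
  i: Integer;
begin
  SmallCounter := 0;
  WideCounter := 0;
  try
    for i := 1 to 300 do
    begin
      Inc(WideCounter);   // fine: Integer has plenty of headroom
      Inc(SmallCounter);  // trips the range check once it passes 255
    end;
  except
    on E: ERangeError do
      WriteLn('Range check caught: ', E.Message);
  end;
  WriteLn('WideCounter = ', WideCounter);
end.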

John L>
IA|AI

HologenicMan
John A. Latimer
http://www.UniversalHologenics.com

"If the Human brain were so simple that we could understand it,
we would be so simple that we couldn't..."
-Emerson M Pugh-

Current project:http://www.vrconsulting.it/vhf/topic.asp?TOPIC_ID=816&whichpage=1

DISCOVERY: The more I learn, the more I learn how little I know.
GOAL: There's strength in simplicity.
NOTE: Goal not always achieved.

hologenicman
Moderator



USA
3322 Posts

Posted - Aug 01 2009 :  22:32:55
BTW, CoPilot is my Neural Based AI interface application.

Once I get CoPilot up and running, I will be converting it over to a .dll which I can provide for testing on 64-bit machines with enough RAM.

This will protect the privacy of my source code while providing the binary for people to use and experiment with.

This is something that I will have to learn how to do.
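
To get a head start on that learning, here is a minimal Free Pascal sketch of what a .dll project looks like. Everything in it is a placeholder; InitBrain and FeedNeuralValue are invented names, not the real CoPilot interface:

code:
library CoPilotStub;
{$mode objfpc}
{ Placeholder sketch of exposing functionality as a DLL: two flat, cdecl
  exports that a host program (or another language) can load and call. }

function InitBrain(MatrixSize: LongInt): LongInt; cdecl;
begin
  { ...allocate the matrix here... }
  Result := 0; // 0 = success in this sketch
end;

function FeedNeuralValue(Channel: LongInt; Value: Double): Double; cdecl;
begin
  { ...push one neural input value, return one output value... }
  Result := Value * 0.5; // dummy behaviour so the stub can be tested
end;

exports
  InitBrain,
  FeedNeuralValue;

begin
end.

A test program would then load the resulting CoPilotStub.dll with LoadLibrary/GetProcAddress (or link it at compile time) and call the two exported names.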

John L>
IA|AI

HologenicMan
John A. Latimer
http://www.UniversalHologenics.com

"If the Human brain were so simple that we could understand it,
we would be so simple that we couldn't..."
-Emerson M Pugh-

Current project:http://www.vrconsulting.it/vhf/topic.asp?TOPIC_ID=816&whichpage=1

DISCOVERY: The more I learn, the more I learn how little I know.
GOAL: There's strength in simplicity.
NOTE: Goal not always achieved.

Edited by - hologenicman on Aug 02 2009 02:24:03

_jc
Curious Member



USA
12 Posts

Posted - Aug 24 2009 :  07:13:49
1st post here, been lurking a couple hours.

Please allow me to introduce myself, as Jagger said...
New to chatbots, a touch of experience with Neural Nets and Genetic Algos - just started playing with Kari3 Pro, tracked her author to his lair here and found your discussions max fascinating. I come from 3D modeling and 3D scene-making digital art, but in the past I was an electronics/audio/acoustics/vibration engineer.

@Holo: Humble suggestion for the audio input of HologenicBrain - a "binaural microphone" into (as you specified above) an FFT Spectrum Analyzer (including the phase between the 2 signals).

For those who have not experienced binaural sound: a product pioneered by Sennheiser (in the 70's?) consists of 2 small pressure mics (the human ear is also a pressure transducer, i.e., not sensing air particle velocity, but differential air pressure - as an analogy, think voltage, not current). Anyway, these tiny mics are worn right at the entrance to the ear canals, using a headband.

These 2 tiny mics sense not only the external sound field, but the skull cavity resonance, the chest cavity resonance, the effects of the outer ear shape, head shape, etc. When you play back a recording from these binaural mics through headphones (especially when you record and listen on your own head), the realism is astounding. For example, you can sense that you turned your head or nodded during the recording.

With their binaural kit, Sennheiser included a dummy head with the correct shapes and skull resonance, shipped in something like a hat box that had a mic-stand thread on top; you would attach the dummy head to the box, which also provided the chest cavity resonance. So you could record on this "audio robot", or on your own head.

Another way to record a lot of fine directional and distance info is with a (10 foot?) circular array of 8 mics. This gives enough resolution to very realistically simulate an audio environment (much better than stereo).

By the way, the original "stereo", as pioneered by RCA, always used 3 speakers: Left, Center and Right. This allows the left and right speakers to be much farther apart and gives much better reproduction. Now, with home theater surround sound, we have gone back to the future. But any stereo system can be improved by summing the left and right signals into an additional power amplifier and center speaker and moving the left and right speakers farther apart. A small left-minus-right speaker behind and above makes a simple surround system.
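
In signal terms (my reading of that wiring, not RCA's exact circuit), the matrixing is just two sums:

code:
C = (L + R) / 2    { feed for the added center amplifier/speaker }
S = L - R          { feed for the small rear "surround" speaker }

with the levels of C and S trimmed by ear relative to the main pair.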

Looking forward to following the cool concepts you guyz are actualizing!
http://en.wikipedia.org/wiki/Binaural_recording

----------
_jc

GrantNZ
Dedicated Member



New Zealand
2677 Posts

Posted - Aug 24 2009 :  09:58:38
Welcome aboard, _jc. I had a quick look at your 3D work (from the web site in your profile) - you've got some good stuff there!

That's fascinating about the binaural sound; thanks for expanding our knowledge. The clip on the wiki page is certainly worth listening to.

From an AI/robotics perspective, I wonder if it is better to recreate sound as a human would hear it, or to use a different paradigm? (Say, a circular array of many mics, mounted around the "head" of the robot.)

May I ask what kind of neural net/genetic algorithm work you have done? I'd be very interested to hear what experience you have in those fields.

And in any case, have fun here, and feel free to join in anywhere, any time!

_jc
Curious Member



USA
12 Posts

Posted - Aug 25 2009 :  02:25:14
Appreciate the welcome Grant.
As for which audio/acoustic method is preferable, I expect an experiment comparing them (and any other top candidates) would be best. But I strongly suspect that a binaural mic would work well - since it is closest to human hearing (given an accurate dummy head, outer ears and chest, and good mics). Fortunately, pressure mics are the most common, smallest, cheapest to manufacture and most accurate types.

As far as GA and NN experience goes, I sometimes wish I were less of a "jack of all trades". The bit of programming experience I've had leads me to believe that one has to do a lot of programming and "keep a hand in" over time to get any good. I'm forever flitting from one to another of my many interests. I say that because I can only find one GA program for non-programmers like me - a pretty simplistic Excel add-on. Consequently, I have done little with Genetic Algorithms - though I'd really like to use them.

Did do a few interesting Neural Networks - also in an Excel add-on, called "Brain Cell". This was some years ago and there are better products available for non-programmers now, I believe.

Anyway: First I did a straightforward spreadsheet with an 8-cell matrix for each of the numerals 0-9 and trained a NN on those. It could then recognize an 8-cell matrix with any of those patterns filled in as the correct digit. If I filled the matrix with a random pattern, it would recognize it as "noise", and if I made one with a numeral that had one cell displaced, say an "8", it would recognize it as "almost 8". That quite amazed me, for a simple one-layer network.
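
For anyone curious what such a single layer looks like in code, here is a minimal sketch in Free Pascal (the toolchain used elsewhere in this thread). It is emphatically not a reconstruction of my spreadsheet: the 8-cell patterns below are invented placeholders, there are only two classes, and a pattern is reported as "noise" simply when no output unit fires.

code:
program OneLayerSketch;
{ Illustrative single layer of threshold units, trained with the classic
  perceptron rule on 8-cell binary patterns. Placeholder data only. }
const
  NIn    = 8;    // cells per pattern
  NOut   = 2;    // two classes keeps the sketch short
  Epochs = 50;
  LRate  = 0.1;
type
  TPattern = array[0..NIn - 1] of Double;
const
  PatA: TPattern = (1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0);
  PatB: TPattern = (0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0);
  Untrained: TPattern = (0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0);
var
  W: array[0..NOut - 1, 0..NIn - 1] of Double;
  B: array[0..NOut - 1] of Double;

function Fires(const P: TPattern; o: Integer): Boolean;
var
  i: Integer;
  s: Double;
begin
  s := B[o];
  for i := 0 to NIn - 1 do
    s := s + W[o, i] * P[i];
  Fires := s > 0;
end;

procedure Train(const P: TPattern; Target: Integer);
var
  o, i: Integer;
  t, y: Double;
begin
  for o := 0 to NOut - 1 do
  begin
    if o = Target then t := 1 else t := 0;
    if Fires(P, o) then y := 1 else y := 0;
    for i := 0 to NIn - 1 do
      W[o, i] := W[o, i] + LRate * (t - y) * P[i];
    B[o] := B[o] + LRate * (t - y);
  end;
end;

procedure Report(const P: TPattern; const Name: string);
var
  o, hits: Integer;
begin
  hits := 0;
  for o := 0 to NOut - 1 do
    if Fires(P, o) then
    begin
      Inc(hits);
      WriteLn(Name, ' -> class ', o);
    end;
  if hits = 0 then
    WriteLn(Name, ' -> noise (no unit fired)');
end;

var
  e: Integer;
begin
  for e := 1 to Epochs do
  begin
    Train(PatA, 0);
    Train(PatB, 1);
  end;
  Report(PatA, 'pattern A');
  Report(PatB, 'pattern B');
  Report(Untrained, 'untrained pattern');
end.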

Then I wanted to try something in physical dynamics. So I developed another NN, bought a couple of balsa wood "10 cent airplane gliders" and took them to a two story indoor office atrium on a quiet Sunday - very still air.

I added a certain amount of weight to the glider's nose, marked the glider's wings off in little squares (square inches of wing surface) with a straight edge and pencil, then adjusted the wing's fore/aft position for a smooth glide slope (through trial & error glides) and marked the best wing position on the body.

Then I snipped a bit off each wingtip and (trial and error again) found the best wing position again. Doing that several times, I developed a table and trained the NN on it.

I then used a different glider with a different nose weight and had the NN tell me how to adjust its wing fore/aft position with the same increments cut off the wings. It worked quite well!

I'm pretty hazy about this (it was a few years back), but I saw a short article in an AI magazine in which a developer had one NN in a feedback loop of another NN. As he "killed" the inner NN (by slowly randomizing its network connections, as I recall - or maybe by deleting them), the outer NN was reported to get "creative" as it tried to make sense of the failing inner network. As I remember, he had it try combinations of metals and it created some unheard-of alloys which turned out to be useful.

This reminds me of the vision experiment where you first dark-adapt by looking through a black rubber fitting into a black box. After a certain amount of adapting time, a strobe (shaped like, say, a circle with an "X" across it) is flashed at you. As your bleached-out eye pigments get replaced, your own neural network plus your brain perceive all these crazy shapes made out of bits of that circle-with-an-"X" pattern.

Anyway it was the first time I'd heard of true creativity in an AI.

As a digital artist with a science bent, I try to read what I can about human visual and auditory perception. I don't think anyone has come right out and said it yet AFAIK - but it appears from the latest anatomical work that the large layers of nerve cells connecting the rods and cones of the eye to the optic nerve are actually neural networks - which would mean that a lot of our visual processing happens long before any signals reach the brain.

Fun stuff!

----------
_jc

_jc
Curious Member



USA
12 Posts

Posted - Aug 25 2009 :  02:42:14
Want to add a link to my main online galleries for anyone who wants to see my 3D modeled scenes - without going through my profile and "Art Head Start" web site. By the way, don't want anyone to get the wrong idea - my scenes use mostly commercial 3D models - only a few minor props are from my very own 3D modeling efforts, with a couple of decent spaceship models as exceptions. Still learning to 3D sculpt (using Silo 3D).

http://fineartamerica.com/profiles/jim-coe.html?tab=artworkgalleries

Hope this is a good place for doing so - will also see if I can put that in my profile as well.

Thanks for looking everyone!

----------
_jc

Edited by - _jc on Aug 25 2009 03:02:43

hologenicman
Moderator



USA
3322 Posts

Posted - Aug 26 2009 :  06:10:34
Hey there and welcome,

quote:
@Holo: Humble suggestion for the audio input of HologenicBrain - a "binaural microphone" into (as you specified above) an FFT Spectrum Analyzer (including the phase between the 2 signals).



I appreciate the lead.

The auditory array that I plan to use involves 4 USB mics set up as ears on two sides of a head. There will be an omnidirectional and a directional mic on each side of the head. The head creates a spatial and auditory barrier between the two sets of mics.

The omnidirectional mics gather the ambient sound on each side of the head, and I intended the two directional mics to be forward-facing (one on each side of the head) for the purpose of binaural sound, which I had understood to be the auditory equivalent of bi-optic (two-eyed) vision for localization purposes.

quote:
I don't think anyone has come right out and said it yet AFAIK - but it appears from the latest anatomical work that the large layers of nerve cells connecting the rods and cones of the eye to the optic nerve are actually neural networks - which would mean that a lot of our visual processing happens long before any signals reach the brain.


This is one of the initial bits of info that drew me into neural nets decades ago. It is really quite amazing. There were some great articles in Scientific American, and in a magazine that I believe used to be called "AI". Also, the good old-fashioned Encyclopaedia Britannica that I bought back in 1987 had just about an entire volume on neural topics, in which they described experiments with cat eyes and the retina's reaction to motion in linear directions for tracking and reacting purposes. I believe that there is a relationship of about ten nerves in the retina to one nerve in the optic nerve (subject to my exaggeration...), since most of the pertinent information about movement, intensity/change, etc. is extracted as "pre-processing" in the retina itself before that distilled information is sent on to the brain.

Way kool stuff, and well worth digging into.

I equate this relationship to using traditional software routines for preparing the neural data before feeding it into an AI matrix. The matrix can be novel and revolutionary, but the pre-processing routines can be off-the-shelf software that has served these purposes for quite a while.
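
As a minimal sketch of what I mean (illustrative only, not my actual CoPilot code; the sizes and names are made up), the pre-processing can be as plain as keeping only what changed between frames before anything is handed to the matrix, a rough software analogue of the retina reporting change rather than raw light level:

code:
program PreprocessSketch;
{ Illustrative "off the shelf" pre-processing: reduce a raw frame to a
  per-cell change map against the previous frame, and feed only that
  distilled map onward. }
const
  W = 8;
  H = 8;
type
  TFrame = array[0..H - 1, 0..W - 1] of Byte;

{ Distil a raw frame into "what changed since last time". }
procedure ChangeMap(const Prev, Curr: TFrame; var Delta: TFrame);
var
  x, y: Integer;
begin
  for y := 0 to H - 1 do
    for x := 0 to W - 1 do
      Delta[y, x] := Abs(Integer(Curr[y, x]) - Integer(Prev[y, x]));
end;

var
  Prev, Curr, Delta: TFrame;
  x, y: Integer;
begin
  { Fake input: both frames dim, the current one has one bright patch. }
  FillChar(Prev, SizeOf(Prev), 10);
  FillChar(Curr, SizeOf(Curr), 10);
  for y := 2 to 4 do
    for x := 2 to 4 do
      Curr[y, x] := 200;

  ChangeMap(Prev, Curr, Delta);

  { Only the changed patch survives; this is what the matrix would see. }
  for y := 0 to H - 1 do
  begin
    for x := 0 to W - 1 do
      Write(Delta[y, x]:4);
    WriteLn;
  end;
end.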

I'm really excited to see your participation, and hope that you continue to keep at it over time. My personal experience is that the hardware has not quite matured enough to implement my projects as I work on them.

My most recent frustration is that I went over to a 64-bit Windows platform with 12GB of RAM in order to finally expand my matrix into a minimally functional matrix of reasonable complexity and cascade, only to find that the Lazarus FPC IDE I am using produces PE32+ (Portable Executable format) binaries, which I found to be limited to only 4GB of static RAM on 64-bit Windows!

It bugs me so much that I have set it aside for a few days - just to settle down a little bit before continuing, since I'm so ticked off.

Of course, now that I know that limitation, I can work within it or go over to Linux or just take a mental vacation again until the technology catches up...

Anyway, welcome, and I am very pleased to see you shaking things up a little around here. I look forward to throwing ideas back and forth with you.

John L>
IA|AI

BTW, IA|AI is something that we made up for International Association of Artificially Intelligent (or International Association of Artificial Intelligence). I am very proud and pleased that this forum brings together like-minded people who are interested in hammering away at stuff like this.


HologenicMan
John A. Latimer
http://www.UniversalHologenics.com

"If the Human brain were so simple that we could understand it,
we would be so simple that we couldn't..."
-Emerson M Pugh-

Current project:http://www.vrconsulting.it/vhf/topic.asp?TOPIC_ID=816&whichpage=1

DISCOVERY: The more I learn, the more I learn how little I know.
GOAL: There's strength in simplicity.
NOTE: Goal not always achieved.

_jc
Curious Member



USA
12 Posts

Posted - Aug 26 2009 :  19:30:10
Well John, that certainly is a frustrating situation! I feel for you, having recently built a 64-bit, quad-core machine to speed up my 3D rendering. I will be building a small networked render farm (a "render garden", starting with 3 PCs) when I get one more PC.

As Grant pointed out above, the audio sensor system design is driven by certain design criteria. I can see at least 2 possibilities for criteria:
1. Maximize realism by mimicking the human hearing system. Let's call it "simulated human hearing".
or
2. Maximize acoustic environment information capture. Let's call it "super-human hearing".

From your description above, it sounds like you wish to simulate human hearing. As an expert in electro-acoustics and microphones, I humbly suggest that your concept has a flaw. "Directional mics" (called "cardioids" in the industry, because of their heart-shaped directional patterns) are technically very poor microphones, and I'd avoid them altogether.

First, they are barely directional at all. If you examine a plot of sensitivity vs. direction at many frequencies, you'll see that they only attenuate rear sound by maybe 6 dB to 12 dB on average (with a sharper dip at one very small and pretty useless angular section), compared to a dynamic range of maybe 90 dB. Calling 6 dB to 12 dB of rejection out of 90 dB "directional" is almost pure advertising hype.
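
To put rough numbers on that comparison (my arithmetic, reading the dB figures as sound-pressure ratios, dB = 20 x log10(p1/p2)): 6 dB is only about a 2:1 pressure ratio and 12 dB about 4:1, while a 90 dB dynamic range is roughly 32,000:1. A rear "rejection" factor of 2 to 4 against a usable range in the tens of thousands is why I call it hype.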

But the big problem is that they get their mild directionality by way of constructive and destructive interference - which is very dependent on wavelength (i.e., frequency). So you get very uneven addition or subtraction with frequency - which plays havoc with the phase response. Back when these mics were invented, no one paid attention to phase, only frequency (that is, engineers looked at the Frequency Domain [Amplitude vs. Frequency] and ignored the Time Domain [Amplitude vs. Time]). Therefore they didn't notice the big phase problem (except in terms of the uneven directionality vs. frequency). With a poor/confused phase response one gets a poor ability to sense sound direction. So mixing phase distortion from directional mics into your sensing system is not a good idea.

Also, cardioid mics used to have built-in transformers (to match them to the then-prevalent 600-ohm "balanced transmission lines" for common-mode electrical noise rejection - that is, without today's built-into-the-mic miniature preamplifiers/line drivers and unbalanced lines). Maybe some are still passive mics like that (no preamp). Such transformers cause even more phase distortion.

Second, human hearing does not get its directionality the way a directional mic does. The human ear uses the same acoustic principle as a pressure mic (pressure mic = omnidirectional mic). That is, differential pressure between a sealed chamber and an open chamber (think rubber balloon stretched over an open metal can and sensing the balloon surface moving in and out as the external air pressure (not air particle velocity) changes). It's fair to say that the human ear IS a pressure mic.

Human sound direction sensing comes from (as you say) perceiving time delays between the signals from the 2 ears. In other words, from the geometry of the acoustic environment - the sound source(s) distance to each ear and also the distance between the ears. So we're talking Time Domain here. And the attenuation (and resonance) of the head plays a part.
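
Here is a minimal sketch of that time-delay idea in Free Pascal (to match the toolchain John is using). Nothing in it comes from an actual setup; the sample rate, window size and the synthetic test signal are assumptions. It just slides one channel against the other and keeps the lag that lines up best, which is one plain-software way of turning that geometry into a number before a neural matrix ever sees it:

code:
program ITDSketch;
{ Illustrative inter-channel time-delay estimate by brute-force
  cross-correlation: find the lag that best aligns left and right. }
const
  SampleRate = 44100;   // samples per second (assumed)
  N = 1024;             // analysis window length
  MaxLag = 40;          // about 0.9 ms either way, enough for head width
type
  TBuffer = array[0..N - 1] of Double;

{ Return the lag (in samples; negative means the right channel leads). }
function BestLag(const L, R: TBuffer): Integer;
var
  lag, i, best: Integer;
  sum, bestSum: Double;
begin
  best := 0;
  bestSum := -1e300;
  for lag := -MaxLag to MaxLag do
  begin
    sum := 0;
    for i := 0 to N - 1 do
      if (i + lag >= 0) and (i + lag < N) then
        sum := sum + L[i] * R[i + lag];
    if sum > bestSum then
    begin
      bestSum := sum;
      best := lag;
    end;
  end;
  BestLag := best;
end;

var
  L, R: TBuffer;
  i, lag: Integer;
begin
  { Synthetic test: the same burst reaches the right "ear" 12 samples later. }
  for i := 0 to N - 1 do
  begin
    L[i] := 0;
    R[i] := 0;
  end;
  for i := 100 to 130 do
    L[i] := 1.0;
  for i := 112 to 142 do
    R[i] := 1.0;

  lag := BestLag(L, R);
  WriteLn('Estimated delay: ', lag, ' samples (',
    (lag * 1000.0) / SampleRate: 0: 3, ' ms)');
end.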

Mild rejection of rear sound and amplification (or focusing) of front sound in humans comes from the shape of the outer ear.

So, here is my suggestion to simulate human hearing:
Don't re-invent the wheel. Use the binaural mic, dummy head (having the proper skull resonance) and chest cavity resonance simulator that Sennheiser's experts already developed - or if that's no longer available, something similar. Or, for even more realism, use a human model and put your binaural mic on him or her.

If you want more rear rejection, simply tape cardboard "Mickey Mouse" ears behind the actual outer ears. I've done that while recording performances in a small noisy club and it worked well (except for a few recorded snickers from behind me). That is, the recording without the cardboard "ear extensions" was technically accurate, but not aesthetically so.

That's because at the actual experience, the eyes (thus brain) are concentrated on the performers, so that attention is withdrawn from the background noise. But when playing back the recording through headphones later, the attention is on the sound alone, so the background noise "sounds louder".

This discrepancy between actual "realistic reproduction" and "expected realism" is a constant concern to artists of all types, and that's what I adjusted with my cardboard "ear extensions".

Hope this helps...
_jc

----------
_jc

Edited by - _jc on Aug 26 2009 19:47:50

GrantNZ
Dedicated Member



New Zealand
2677 Posts

Posted - Aug 27 2009 :  07:49:26
Very interesting stuff!

For what it's worth (probably not much), I have noticed that video games are getting very good at representing a 3D sound field with just stereo output, such that sounds can still instinctively feel "behind you" even out of simple speakers in front of you. Whether this process can be reversed for analysis of sound is another question; I'm guessing probably not!

Interesting to hear about your successful neural net experiments too! I wonder if anyone has developed a single net that can successfully pull off two tasks, e.g. can identify numbers and can give wing position advice?