Virtual Humans Forum
Artificial Intelligence: Neural Based AI - NBAI (Next Generation AI)

raybe
Curious Member (USA, 18 Posts)

Posted - May 31 2010 :  18:54:26
I would have thought that the inconsistencies of the signals, and how your net interprets them, would be the goal. No environment, microphone, or even processor is exact; there are just too many variables. But I understand that, to a degree, it really shouldn't make enough of a difference to be concerned about. Remember, even when digital recording was released, it took some time before engineers realized that such a clean signal at both ends of the spectrum wasn't natural. They needed to add noise and harmonics to give digital a truer sound relative to what people actually hear, not just an accurate 20 Hz to 20 kHz response. To most engineers, and even DJs at the time, the music was lacking something; it sounded like it was hitting a brick wall at the edges of the frequencies we can supposedly hear. I know you're not talking about fidelity in this sense, but it was the inconsistencies that made it work.

raybe

hologenicman
Moderator (USA, 3323 Posts)

Posted - May 31 2010 :  19:10:09
quote:
I would have thought that the inconsistencies of the signals, and how your net interprets them, would be the goal.

Good thought, but I do believe that the needed inconsistencies have to be environmental inconsistencies.

If you remove the variable of inconsistent audio input hardware and settle on the same make, model, device, and configuration of microphone/equipment, then the inconsistencies in the environment can be mapped out for meaningful interpretation by the NBAI. Even when listening to pre-recorded sounds, they should be played as sound in the room (or through headphones) and then captured with the NBAI's dedicated audio input equipment.

To follow the biological example, our ears are stationary relative to our heads, and that constant is what allows us to interpret/misinterpret the sounds we are presented with.

To further expand the biological example, our necks allow our heads to move in relation to our shoulders, but our sense of proprioception includes the sensory and motor information from our neck's position and movement in the interpretation of the sounds we receive.

A cool further biological example: deer have ears that can be steered in various directions to modify the audio signal they collect, but similar to our necks, the deer's sense of proprioception tracks each ear's movement in relation to its head and includes this information as input in association with the interpretation of the audio input.

There is an audio mapping of sorts that takes place naturally in the neural association of audio inputs, and it is dependent upon the consistent (or proprioceptively aware) configuration of the audio hardware.

Of course, if a particular microphone becomes obsolete and needs to be replaced, the NBAI will just have to cope with the change, much as we humans have to get used to prosthetic devices. The goal, however, would be to keep those hardware changes to a minimum and as far between as possible...

John L>
IA|AI

HologenicMan
John A. Latimer
http://www.UniversalHologenics.com

"If the Human brain were so simple that we could understand it,
we would be so simple that we couldn't..."
-Emerson M Pugh-

Current project: http://www.vrconsulting.it/vhf/topic.asp?TOPIC_ID=816&whichpage=1

DISCOVERY: The more I learn, the more I learn how little I know.
GOAL: There's strength in simplicity.
NOTE: Goal not always achieved.


raybe
Curious Member (USA, 18 Posts)

Posted - Jun 01 2010 :  02:52:16
If you don't mind me asking, why would your system need that much information? After all, there is a natural reason a deer can manipulate its ears in that fashion, and people don't really need it, although it may work better for a particular purpose. Again, it just seems like the inconsistencies make us what we are. Just like aging and dirty ears. Besides, no two outer ears are the same: differently proportioned skulls and resonance. If fidelity plays no role, then it still leads back to processing the information. We use electronic processors to correct differences in signals, and most will also give you visual details, because we can't always detect the variables just by adjusting the environment or the instrument. Just thinking out loud.

raybe

hologenicman
Moderator (USA, 3323 Posts)

Posted - Jun 01 2010 :  10:26:50
quote:
Just like aging and dirty ears.

I love the examples of aging and dirty ears. It's something that I can relate to...

They are both great examples in that they both happen gradually. This gives the neural associations time to gradually adjust to the changes, perhaps even without noticing the difference or bringing attention to it.

quote:
If you don't mind me asking, why would your system need that much information? After all, there is a natural reason a deer can manipulate its ears in that fashion, and people don't really need it, although it may work better for a particular purpose.


I've been following through with the concept of this thread's Neural Based AI, which thrives on having as many individual inputs as possible, so long as those inputs have functional significance, such as the frequency shift or the average, low, and peak values for each frequency range.

The more valid, broken-out data the NBAI has to associate, the better that association will function.

quote:
Besides, no two outer ears are the same: differently proportioned skulls and resonance.

Exactly! It may be a funny-looking head, but it is my head!

I'm not saying that the input has to be standardized across the industry, so to speak. Rather, I am saying that it has to be standardized for the individual NBAI implementation. For that individual, consistency (or gradual change) is imperative. There can be dramatic differences from one individual NBAI to another.

Perhaps I have been confusing the issue somehow...

Anyway, as an individual, my ears are very important to me. Another individual can have entirely different ears, head size, chest resonance, etc.

John L>
IA|AI

BTW, I've been enjoying our discussion. I've been quiet on the forum for a while, and it is good to be back in the discussion again.



raybe
Curious Member (USA, 18 Posts)

Posted - Jun 02 2010 :  05:18:13
hologenicman, I also appreciate the time. It is making me think of audio in different ways, thanks to your NN examples. I was just passing through, as I usually do, and when I read about the cardboard ears it made me stop dead. If anyone can confuse a topic, don't worry, it's probably me. I understand somewhat better after your last reply, but it still sounds like having your cake and eating it too. More points of reference do give you a better overall picture, but the more you use, the harder it is to process, or the more time it needs. Let me see if I can make sense of this, or let me know if I am heading in the wrong direction. Let's say the instruments are in a stable state, but the sound source quickly moves from side to side, or in a circular pattern. I would think that with so many points of reference, even in proximity to each other, the net would need to work its butt off trying to process information thrown at it from many locations at one time, analyzing all the instruments' inputs at the same time in order to decipher them. Even with just two ears, when we get bombarded with sound from different locations and timings, it can drive you crazy. Just to stay on track: aren't you trying to reproduce something closest to real-time hearing, or just a better way to improve on it using an AI net simulation?

raybe

hologenicman
Moderator (USA, 3323 Posts)

Posted - Jun 02 2010 :  07:35:09
quote:
Let's say the instruments are in a stable state, but the sound source quickly moves from side to side, or in a circular pattern. I would think that with so many points of reference, even in proximity to each other, the net would need to work its butt off trying to process information thrown at it from many locations at one time, analyzing all the instruments' inputs at the same time in order to decipher them.

So, with two ears having the audio feed broken out the way I described earlier:
quote:
One could set up 100 individual frequencies per ear by using 100 Hz steps between 100 Hz and 10,000 Hz, with an individual intensity value for each frequency range.

That's 200 auditory neural input values to be fed into the Neural Based AI.

There are other "pre-processing" values that can be extracted as well, such as the average value for each frequency over time (persistence), the peak value for each frequency, and the minimum value for each frequency. That gives four values for each frequency range.

There is also a very important biological value: whether the frequency is shifting up or down. This would be extrapolated by comparing the frequency's intensity to its surrounding frequencies over time. That gives two more inputs per frequency (frequency-shift-up and frequency-shift-down).

So, six values per frequency range, times 200 frequency ranges, totals out at 1,200 audio neural inputs to be fed into the Neural Based AI.


that is 600 neural inputs per ear, for a total of 1,200 neural inputs coming from just the two ears.

Here comes the clarification...

All 1,200 of those inputs must be processed by the NBAI every time, whether there is silence, lots of instruments, a single flute soloist, or any other combination of sounds to be listened to.

All 1,200 of those values extrapolated from the two audio input ears must be worked through the NBAI input system no matter what.

The NBAI works with a lot of parallel throughput, and it optimizes in its parallel association, not by reducing the quantity of inputs.

The NBAI must be thought of and approached differently than the traditional serial, rule-based processing that is currently most popular.
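
Here's a minimal sketch of what I mean, in Python (NumPy assumed; the frame size, decay constant, and shift proxy are just illustrative choices on my part, not fixed parts of the design):

code:
import numpy as np

SAMPLE_RATE = 44100                     # assumed capture rate
FRAME = 4096                            # samples per analysis frame
EDGES = np.arange(100, 10100, 100)      # 100 Hz steps, 100 Hz to 10 kHz

def range_intensities(frame):
    """Collapse one audio frame into one intensity per 100 Hz range."""
    mags = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / SAMPLE_RATE)
    which = np.digitize(freqs, EDGES)   # assign each FFT line to a range
    return np.array([mags[which == i].sum() for i in range(1, len(EDGES))])

class EarChannel:
    """Tracks the six per-range values (current, average, peak, minimum,
    shift-up, shift-down) for one ear, as described above."""
    def __init__(self, n, decay=0.95):
        self.avg = np.zeros(n)          # running average (persistence)
        self.peak = np.zeros(n)
        self.low = np.full(n, np.inf)
        self.prev = np.zeros(n)
        self.decay = decay

    def update(self, cur):
        self.avg = self.decay * self.avg + (1 - self.decay) * cur
        self.peak = np.maximum(self.peak * self.decay, cur)
        self.low = np.minimum(self.low, cur)
        rise = np.clip(cur - self.prev, 0, None)
        # Crude shift proxy: a rise in this range while the neighbouring
        # range was active last frame suggests energy sliding in pitch.
        up = rise * np.roll(self.prev, 1)     # neighbour below: shift up
        down = rise * np.roll(self.prev, -1)  # neighbour above: shift down
        self.prev = cur
        return np.concatenate([cur, self.avg, self.peak, self.low, up, down])

left = EarChannel(len(EDGES) - 1)
right = EarChannel(len(EDGES) - 1)
# One frame of captured samples (length FRAME) per ear per update:
# inputs = np.concatenate([left.update(range_intensities(left_frame)),
#                          right.update(range_intensities(right_frame))])
# Six values per range per ear; on the order of the 1,200 total inputs.

Every call produces the full fixed-width vector whether the room is silent or full of instruments, which is the point about never reducing the quantity of inputs.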

So, have I muddied the waters more, or made them a bit clearer...?

John L>
IA|AI


hologenicman
Moderator (USA, 3323 Posts)

Posted - Jun 02 2010 :  07:40:48
quote:
but the sound source quickly moves from side to side, or in a circular pattern. I would think that with so many points of reference, even in proximity to each other, the net would need to work its butt off trying to process information thrown at it from many locations at one time

BTW, it must be understood that we are discussing a real-time system, with a sampling rate sufficient for the Nyquist limit, to allow for adequate temporal mapping of the movements of the sound source.

It is a lot of processing to do, repetitively and fast enough to capture the quick motions of the sound source, but it must be done that quickly, or the concern you raised about quickly moving sound sources will become a real problem.

Also, the sound sampling must be fast enough to capture frequency and amplitude motion that changes quickly, just as it must when capturing a quickly moving sound source.
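
A rough sketch of that arithmetic (the 44.1 kHz capture rate and the 10 Hz motion figure are just illustrative assumptions):

code:
SAMPLE_RATE = 44100      # assumed audio capture rate

def max_hop(motion_hz, fs=SAMPLE_RATE):
    """Largest hop (samples between analysis frames) that still samples
    a motion of motion_hz at the Nyquist rate (two frames per cycle)."""
    frames_per_second = 2.0 * motion_hz
    return int(fs // frames_per_second)

# To track a source sweeping back and forth 10 times per second,
# frames must arrive at least ~20 times per second:
print(max_hop(10))       # -> at most 2205 samples between frames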

John L>
IA|AI


raybe
Curious Member (USA, 18 Posts)

Posted - Jun 05 2010 :  02:39:13
Sorry, hologenicman, but I have been under the weather a bit. To be honest, I'm a little clearer and muddier at the same time.
You mention the sound sampling must be done fast enough to capture frequency and amplitude motion that is changing quickly. But inherently frequencies move at different speeds, and distance will always add to those differences; the Doppler effect is one example. The other subject you mentioned, and I'm not sure if it was just for example purposes, was the frequency ranges (100 Hz steps between 100 Hz and 10,000 Hz), which gave you plenty of inputs (600 per ear, for a total of 1,200), but you still seem to be limiting the bandwidth. I do remember you mentioning that fidelity was not really the key, but rather the information in the NBAI input system. If that is the case, then part of me understands, but the other part is struggling with the fidelity part of this equation. Sorry. Slowly but surely!

raybe

hologenicman
Moderator (USA, 3323 Posts)

Posted - Jun 05 2010 :  04:42:57
Hey there. Sorry that you have been under the weather.

I see the confusion that I have caused right off:

quote:
You mention the sound sampling must be done fast enough to capture frequency and amplitude motion that is changing quickly. But inherently frequencies move at different speeds, and distance will always add to those differences; the Doppler effect is one example.

I am referring to the rate at which the frequency/pitch changes, and whether it is getting higher or lower.

Think of a slide whistle: when you push in the wire, the pitch gets higher, and when you pull out the wire, it gets lower...

This can happen slooowly or quickly, depending upon how fast you move the wire. As the pitch changes, the amplitude in the adjacent frequency ranges of our neural input will move along in a wave-like fashion, kinda like the audience at a football game doing the "wave..."

If the pitch shifts very quickly and we have not taken audio samples often and quickly enough (the Nyquist limit), the NBAI will not be able to understand what is really happening with the frequency/pitch change.

This frequency shift up/down is extremely important neural data.

The human voice has simultaneous shifts up and down in the amplitudes of its frequencies, making an audio-spectral dance of repeatable patterns. That's why our voices are so discernible and individual, as opposed to a pure signal such as that from a flute.
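
Here is a toy sketch of just the single-peak, slide-whistle case to make the up/down idea concrete; a voice with several simultaneous shifts would need per-range tracking instead of a single peak:

code:
import numpy as np

def shift_direction(prev_ranges, cur_ranges):
    """Follow the dominant frequency range between two frames."""
    p, c = np.argmax(prev_ranges), np.argmax(cur_ranges)
    if c > p:
        return "shift-up"     # the peak walked to a higher range
    if c < p:
        return "shift-down"   # the peak walked to a lower range
    return "steady"

# A whistle sliding upward: the peak walks from range 3 to range 4.
print(shift_direction(np.array([0, 0, 0, 9, 1]),
                      np.array([0, 0, 0, 1, 9])))  # -> shift-up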

I hope that I am not confusing things worse.

Have you ever seen a water fountain display timed to choreograph with the music...? Each one of those stationary water-jet spouts can coordinate with a sound frequency range. Let's say we could control the choreography: if they play the slide whistle from high to low, the water plume moves from left to right; if they play it from low to high, the plume moves from right to left.

The human voice is similar to two or three slide whistles all independently playing up and down, NOT in unison. Imagine the water jets' amplitudes following all three slide whistles simultaneously. This waterworks display is a good representation of what the human voice's audio signal, broken out into frequency ranges plus pitch-shift data, would be like going into the neural inputs of a Neural Based AI.

I hope you feel better. Get some rest, and I'll hear from you when you're well.

John L>
IA|AI




raybe
Curious Member (USA, 18 Posts)

Posted - Jun 27 2010 :  06:27:16
Hello HologenicMan, I'm kind of back and moving again. I wasn't sure if you wanted to continue the thread, because so much time has passed. Believe it or not, I did have something I wanted to respond to from your last post, but I was so ill that I only thought I had written some notes, because I knew I would not be able to remember. If you would like to move on, or come back to this subject at a later time, I understand. I truly enjoyed our conversation. Stay well.

raybe

hologenicman
Moderator (USA, 3323 Posts)

Posted - Jul 10 2010 :  23:42:50
Hi there, stranger.

Sorry that I have not been around lately; I've been distracted by my many other hobbies (and work...).

Yes, I am enjoying our conversation very much, and would like to continue. No matter what else I am doing for hobbies, AI is always somewhere in the back of my mind.

Lately, I have been creating music and posting it on my YouTube channel, but even in that, I have been thinking about timing and the nature of consciousness, as I've discussed in some of my other threads...

http://www.vrconsulting.it/vhf/topic.asp?TOPIC_ID=58&whichpage=1

http://www.vrconsulting.it/vhf/topic.asp?TOPIC_ID=676

http://www.vrconsulting.it/vhf/topic.asp?TOPIC_ID=101&whichpage=1

Please pick up this thread wherever you wish to continue. I'll try to make a point of checking in more regularly.

John L>


hologenicman
Moderator (USA, 3323 Posts)

Posted - Dec 27 2010 :  04:12:39
OK, I am getting back into the NBAI project...

My HologenicBrain is pretty much stable at the moment, waiting for affordable RAM/multiprocessor technology to advance and for prices to come down.

In the meantime, I was going to start working on the utilities for pre-processing the neural data that will be fed into it.

All of my neural input has to be transformed into 2-dimensional graphical arrays of data that can be superimposed onto the various input planes of the HologenicBrain.

Mikmoth, once I started working with graphical arrays as the input medium for the neural data, I thought of you.

I was working on the graphical arrangement of the input for the Auditory Plane, and I remembered that you used a utility for displaying music graphically.

So, at this time, I am beginning to work on the Graphical input arrays for the Auditory Planes and for the Visual Planes.

BTW, the graphical data can vary in the intensity of the individual points (gray-scale) as well as in the quantity of the points (bar chart).

I would love input on a practical utility that is available for breaking the entire audio spectrum out into a usable graph in real time.

The audio data also needs to be broken out graphically for highest intensity, similar to the peak-intensity bars on the graphical spectrum displays of stereo equipment.

Another value to graph is the shift in pitch, from lower to higher or higher to lower.

There are a variety of items to graph, but the basic starting point is a utility for graphing the audio spectrum.
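
To make the bar-chart idea concrete, here is a minimal sketch of one such input plane (NumPy assumed; the 64-row height and the peak-decay factor are arbitrary choices of mine):

code:
import numpy as np

def spectrum_plane(intensities, peaks, height=64):
    """Render per-range intensities as a 2-D gray-scale array: one column
    per frequency range, a bar for the current level, and a brighter
    peak-hold marker like the bars on a stereo spectrum display."""
    n = len(intensities)
    plane = np.zeros((height, n), dtype=np.uint8)
    top = max(float(np.max(peaks)), 1e-9)   # avoid dividing by zero
    bars = (np.clip(intensities / top, 0, 1) * (height - 1)).astype(int)
    marks = (np.clip(peaks / top, 0, 1) * (height - 1)).astype(int)
    for col in range(n):
        plane[:bars[col], col] = 128        # the bar itself (mid gray)
        plane[marks[col], col] = 255        # peak-hold marker (white)
    return plane   # row 0 is the quiet end; flip vertically to display

# Decaying peak-hold, updated once per audio frame:
# peaks = np.maximum(peaks * 0.98, intensities)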

Let me know if you can lend a hand.

John L>
IA|AI


mikmoth
Moderator (USA, 2082 Posts)

Posted - Dec 27 2010 :  04:53:13
I use FMOD in my programs to display spectrum and orthographic data. The SDK comes with examples in many languages.

You can also try BASS, which is another audio library.

You definitely want to use an audio library; the Windows API has nothing at the level of complexity you will need to make this work.


 http://lhandslide.com

hologenicman
Moderator (USA, 3323 Posts)

Posted - Dec 27 2010 :  08:26:43
Thanks a lot, Mikmoth,

I will definitely be checking out both FMOD and BASS.

John L>
IA|AI


mendicott
Curious Member (7 Posts)

Posted - Dec 31 2010 :  07:59:37
hologenicman, forgive me, it's late and I'm working on BibTeX bibliographic citations, and I find it really annoying when people do not date their publications. Could you please tell me the proper issue date of your http://clovercountry.com/Downloads/The_Hologenic_Brain_16_REPAIRED.doc ?

Thanks,

- Marcus Endicott
http://twitter.com/mendicott