A Theory on Neural Signals

by Archis Gore

I have been working on neural networks for the past four years. I have also tried to develop a symbolic-numerical hybrid network. It is still not powerful enough for any practical use, but it worked well on my test set. At this stage, it occurred to me that there might be some crucial aspect of neural signals that I have overlooked, one that allows real neurons to communicate better than our artificial neural networks.

The difficulty in interpreting and replicating neural signals arises from the fact that they cannot be completely classified as either digital or analog. Although any analog signal can be converted into digital form, some part of the brain may process raw analog signals whose properties are not all preserved, or at least not obvious, once the signal has been digitized. For example, a signal’s magnitude may be of no consequence while its phase angle carries the information. Once we know of such properties, they may be easy to verify in the digitized signal, but they may be difficult to discover from it.

At this point, we must take an apparent detour to make my point clear. I once read a hypothetical conversation between Sherlock Holmes and Dr. Watson in the book “This Chancy, Chancy World”. Essentially, Holmes tells Watson that once a case has been solved, it is very easy to follow the evidence leading to the conclusion; the same train of thought is not possible before the conclusion is known. For example, once people were sure that the Pythagorean theorem holds true for any right-angled triangle, dozens of methods of proving it were devised. But when nobody knew of the result, only one person, viz. Pythagoras himself, was able to derive it.

The above analogy holds true for neural signals as well. My claim is that once we find some interpretation for the signal, proving or verifying it on the digitized version of the signal will be relatively easy. But for actually discovering the interpretation, a digital representation may not be the most suitable. We must devise some radically different, even eccentric, methods of visualizing the signals. I can think of a million possibilities for people to try out immediately. Why not treat the phase angle and the magnitude as two different variables? Try representing them in a two-dimensional space. Try a scatter diagram. Find the correlation between them.
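As a rough illustration of the kind of experiment I have in mind, here is a minimal sketch. It assumes a digitized recording stored in a NumPy array and uses the Hilbert transform merely as one convenient way to separate magnitude from phase; everything in it is my own illustration, not a claim about how real recordings should be analysed.

import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import hilbert

# Stand-in for a digitized neural recording; replace with real samples.
t = np.linspace(0.0, 1.0, 1000)
signal = np.sin(2 * np.pi * 7 * t) * (1 + 0.3 * np.random.randn(t.size))

# The analytic signal lets us treat magnitude and phase as two separate variables.
analytic = hilbert(signal)
magnitude = np.abs(analytic)
phase = np.angle(analytic)

# Scatter diagram of phase against magnitude, plus their correlation.
print("correlation:", np.corrcoef(magnitude, phase)[0, 1])
plt.scatter(phase, magnitude, s=2)
plt.xlabel("phase angle (radians)")
plt.ylabel("magnitude")
plt.show()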

For my next theory, I would again like to start off from an apparently different angle. While preparing for the ICPC, my team came across a problem in which we were required to store the status of the road signals facing each of the four roads approaching a junction. For reasons beyond the scope of this article, we decided to store them as flags in a decimal number, with each direction represented by a digit at a different place value. The north direction took the most significant digit, and the significance decreased as we moved clockwise through the remaining directions. For example, 1101 meant that north was green, east was green, south was red, and west was green. The same information could be represented in a binary number of four bits, but, as I explained earlier, we had reasons for using a decimal number. Now consider this same number converted into binary format: (1101)10 would be represented as (10001001101)2. Even though the binary counterpart conveys the same information, it is very difficult to interpret if one has no idea of the original intent of the signal. Each digit in the decimal version has a positional significance which is lost in the binary representation, so someone who does not know the format may find it very difficult to make any sense of the binary equivalents of the original numbers. Moreover, a newcomer may get lost trying to interpret the seven extra bits which appear only because of the large magnitude of the decimal number.
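A minimal sketch of the kind of encoding we used follows; the function names are my own and purely illustrative.

# Each decimal digit is a one-bit flag: 1 = green, 0 = red.
# Digit order, most to least significant: north, east, south, west.
DIRECTIONS = ["north", "east", "south", "west"]

def encode(states):
    # states maps each direction to 1 (green) or 0 (red)
    return int("".join(str(states[d]) for d in DIRECTIONS))

def decode(number):
    digits = str(number).zfill(len(DIRECTIONS))
    return {d: int(flag) for d, flag in zip(DIRECTIONS, digits)}

print(encode({"north": 1, "east": 1, "south": 0, "west": 1}))  # 1101
print(decode(1101))  # each digit keeps its positional meaning
print(bin(1101))     # 0b10001001101 -- the same value, with the meaning obscured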

Along the same lines, if we digitize a neural signal whose natural representation is in a number system of base 3, the result may not convey the same meaning. For example, the number 22 in base 3 has the decimal value 8 and the binary value 1000. But suppose each of the 2’s in 22 is a three-level flag. In that case, the binary equivalent may not give us the information actually sent by the neuron to the brain. It would be a big mistake to assume that all communication in the brain is based on the values of numbers. Values remain the same across number systems of different bases; they are all mathematically equivalent, and all equations hold true across them. But what if the basic information transfer of the brain takes place in a number system whose base is neither two nor ten? The values would remain the same, but what if the position of a digit carried a significance of its own? Many people may dismiss such situations by commenting that the signal is analog in nature. But what if the signal is not analog and only appears to be analog because we are unable to make any sense of its digitized version?
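A tiny sketch of the point: the same value decomposes into very different digit patterns depending on the base we assume, and only one of those patterns exposes the flags. The code below is mine, for illustration only.

def to_base(n, base):
    # Return the digits of n in the given base, most significant first.
    digits = []
    while n:
        digits.append(n % base)
        n //= base
    return digits[::-1] or [0]

# The value 8 is 1000 in binary, but as base-3 digits it is [2, 2]:
# possibly two three-level flags rather than one numeric value.
print(to_base(8, 2))  # [1, 0, 0, 0]
print(to_base(8, 3))  # [2, 2]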

On an ending note, I would like to introduce a drastically weird but, in my opinion, highly probable possibility. What if the digits at different positions were in different base-number systems? For example, a three-level flag could be transmitted along with a seven-level flag. The number 62 would then be 6*(7^1) + 2*(3^0), equaling 44 in decimal, which does not convey quite the same information as 62. This is quite plausible, since different pieces of information may require different numbers of distinguishing states. For instance, the above number may be thought of as an optical signal from the eyes: the three-level flag indicates the colour (Red, Green or Blue) and the seven-level flag gives its intensity. However, to assume that this information is transmitted separately, or that all of it is in the same number system, would be a fatal mistake. In this way, the neuron may be able to transfer more information in one “clock cycle”, and to do so over an analog signal. So what we may see as 44, if represented as 62 according to the correct format, may give us more meaning. Once we know that meaning, we can easily derive it from 44 later on.
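To make the scheme concrete, here is a sketch of the packing described above, with each digit scaled by its own base raised to its position. The bases, names, and colour assignments are purely illustrative assumptions of mine.

# Hypothetical mixed-base packing: each digit has its own base and
# contributes digit * base**position. Most significant digit first.
BASES = [7, 3]                     # a 7-level flag followed by a 3-level flag
COLOURS = ["red", "green", "blue"]

def pack(digits):
    n = len(digits)
    return sum(d * b ** (n - 1 - i) for i, (d, b) in enumerate(zip(digits, BASES)))

intensity, colour = 6, 2           # the digits of "62"
print(pack([intensity, colour]))   # 6*7**1 + 2*3**0 = 44: the bare value hides the flags
print("intensity =", intensity, "colour =", COLOURS[colour])  # the intended reading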

A third note on neural signals that I want to make is less philosophical and more practically applicable. As an experiment, I asked a hundred of my classmates, friends and acquaintances whether, in their experience, it had ever happened that when they touched an extremely hot or cold object they were unable to immediately tell whether it was hot or cold. A majority (around 80%) replied in the affirmative: although they immediately noticed that they were touching a body at an extreme temperature, they could not immediately determine the direction of the extreme. This led me to believe that the signals indicating the temperature of an object touching us are something like a “signed integer”. More precisely, I believe that the magnitude of the variation is transmitted independently of its direction. For example, if the skin wants to tell the brain that it is touching an object at a temperature of 40 °C, it sends the magnitude, viz. 40, independently of the sign, viz. +. The above example is a rough sketch; in reality, the magnitude may be relative to the current body temperature of the subject.
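A crude sketch of the “signed integer” idea, using the refinement that the magnitude is measured relative to body temperature; every number and name here is an assumption of mine.

BODY_TEMP_C = 37.0   # assumed reference point for the skin's measurement

def encode_temperature(object_temp_c):
    # Send magnitude and direction as two separate signals.
    delta = object_temp_c - BODY_TEMP_C
    magnitude = abs(delta)               # "how extreme" -- transmitted on its own
    direction = 1 if delta >= 0 else -1  # "hot or cold" -- resolved separately
    return magnitude, direction

print(encode_temperature(40.0))   # (3.0, 1)   -> hotter than the body by 3 degrees
print(encode_temperature(-10.0))  # (47.0, -1) -> colder than the body by 47 degrees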

More generally, it would not be improper to state that the sensory neurons tell the brain when it is in danger and what kind of response is necessary, rather than sending it all the details of the current emergency; the details are sent later. The same concept, applied in robotics, could be very helpful. Current robots have sensors which blindly send everything they receive directly to the robot’s central processing unit. Human neurons, in contrast, ensure that if the body is in some kind of danger, the brain is notified of that before anything else; most of the analysis that determines whether a sensory input indicates an emergency is done at the neuron itself. If there is nerve damage due to a fire, the brain will already have been told that something is seriously wrong. A natural response would be to run from the fire, and the brain could then start damage analysis and control operations. If, on the other hand, the communication between a robot’s sensors and its processor fails, the robot has no idea that it has to run. So there are many ideas that could be used in robotics given even our primitive knowledge of the workings of the human brain.
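A toy sketch of the “alert first, details later” idea for a robot; the classes and threshold are invented purely for illustration.

PAIN_THRESHOLD = 60.0   # hypothetical limit beyond which a reading is an emergency

class Controller:
    def alert(self, reflex):
        print("ALERT: trigger reflex ->", reflex)   # act before analysing
    def report(self, details):
        print("later analysis of", details)

class Sensor:
    def __init__(self, controller):
        self.controller = controller
    def read(self, value):
        # The emergency check happens at the sensor, not at the controller.
        if value > PAIN_THRESHOLD:
            self.controller.alert("withdraw")         # tiny, urgent message first
        self.controller.report({"raw_value": value})  # full details afterwards

Sensor(Controller()).read(85.0)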

Having presented this hypothesis, I would now like to proceed to another, related one. A very popular example of the success of neural networks over traditional programs for optical signal processing runs as follows. Consider a person driving a car, waiting at a road junction for the signal to turn green. He sees many things at a glance: the colour of the road, the holes in it, the people in his rear-view mirror, the people in front of him, and so on. He even sees every person’s clothes and their exact colours. But his brain ignores most of this data and processes only the most crucial part, namely his path to his destination and the signal permitting him to follow that path. In a similar way, neural networks are able to ignore irrelevant data which conventional software would most probably process. Yet the difference I want to highlight is not in how the processing of a neural network differs from that of the human brain, but in where that processing takes place. Though it is widely accepted that the brain is responsible for removing the irrelevant data, I believe otherwise. I believe that the eyes themselves (or, more appropriately, the neural bundles within the eyes) are responsible for this “signal filtering”.

It would be rather rash to jump to the conclusion that the eyes simply become seasoned to this filtering through experience. That conclusion is partly correct, but my idea is quite different. Even if the eyes became seasoned to the filtering needed while driving, they still provide a different kind of filtering when we watch television. It could be argued that the eyes become seasoned to a wide variety of filtering schemes for the different activities we perform. But when we take up a completely new activity, one we may never have performed before in our lives, our eyes can still filter out the information that is irrelevant to that particular activity.

For me to explain what I have in mind, you must indulge me in a diversion once again. For many years, PCs have had a programmable interval timer. The timer itself has no idea of the interval at which it should generate clock pulses; whenever software requires pulses of a particular kind, it programs the timer to generate them. Most modern graphics adapters carry powerful processors, and some of these are programmable: there are programmable pixel shaders, and each application can send the actual code to be executed by these processors to the graphics card. In essence, the graphics card by itself has no knowledge of what operations to perform in a given context; the application handling that specific activity tells the graphics card how its data is to be processed.

Following the same analogy, my hypothesis is that the brain programs our eyes to perform a certain type of processing on the data they receive. The brain quickly analyses the activity to be performed and determines the processing to be done, adapting it from time to time if necessary. The eyes by themselves are not as adaptive as we might give them credit for: they simply do not have enough information to determine what is to be done, nor the storage capacity to hold the various types of processing associated with various contexts.
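The hypothesis can be sketched in code along the lines of the pixel-shader analogy: the eye only applies whatever filter the brain has loaded into it. All names and the relevance test below are invented for illustration.

class Eye:
    def __init__(self):
        self.filter = lambda scene: scene      # default: pass everything through

    def program(self, filter_fn):              # analogous to loading a shader
        self.filter = filter_fn

    def see(self, scene):
        return self.filter(scene)              # the filtering happens in the eye

eye = Eye()
scene = ["traffic signal", "potholes", "shirt colours", "own path", "rear-view mirror"]

# The brain chooses the processing for the current activity (driving)
# and loads it into the eye.
eye.program(lambda items: [x for x in items if x in ("traffic signal", "own path")])
print(eye.see(scene))   # only the task-relevant items come through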

In this article I have simply written down my ideas and hypotheses about how the brain may work. I have not yet conducted any serious or extensive experiments to prove or disprove the theories presented above.

 