German electronics company Robert Bosch believes that to overcome the Big Data challenge, we must create solutions that apply intelligence at every level, from edge sensors to centralized sensor hubs to cloud data analytics.
Fortunately, our brains come equipped with the smartest of smart sensors - the eyes, ears, nose, taste buds and sense of touch - and they offer a model for shaping Big Data solutions to the needs of the Internet of Things (IoT).
Marcellino Gemelli, director of business development at Bosch Sensortec, told attendees at SEMI's recent MEMS & Sensor Executive Congress (MSEC): 'We have to feed the big data problem into a brain-based model generator, then use that model to predict what the optimal solution should look like. Thanks to the versatility of the neuron, these machine learning solutions can work on more than one level.'
The neuron is the brain's microprocessor: it accepts thousands of big-data inputs through memory-mediated synapses on its dendrites, yet emits only a single voltage spike along its axon. In this way, the receptors of the eyes, ears, nose, taste buds and tactile sensors (which mainly sense presence, pressure and temperature) preprocess vast amounts of raw big-data input, then transmit digest information (encoded as voltage spikes) along the spinal cord to the hub known as the 'old brain' - the brainstem and the automatic behavioral centers responsible for breathing, heartbeat and reflex tasks.
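To make the analogy concrete, here is a minimal sketch (not Bosch code) of the integrate-and-fire behavior described above: thousands of weighted dendritic inputs are accumulated, and the axon carries only a single binary spike. The weights, threshold and input values are illustrative assumptions.

```python
import numpy as np

def neuron_spike(dendritic_inputs, synaptic_weights, threshold=1.0):
    """Toy integrate-and-fire neuron: thousands of weighted dendritic
    inputs are summed, but the axon carries only one binary spike."""
    membrane_potential = np.dot(synaptic_weights, dendritic_inputs)
    return 1 if membrane_potential >= threshold else 0

# Illustrative example: 1,000 raw sensor readings reduced to one spike.
rng = np.random.default_rng(0)
inputs = rng.random(1000)             # raw "big data" arriving at the dendrites
weights = rng.random(1000) / 500      # synaptic conductances (assumed values)
print(neuron_spike(inputs, weights))  # -> 0 or 1
```

The point of the sketch is the data reduction: a thousand inputs in, one bit out, which is exactly the preprocessing role the article ascribes to the sensory receptors.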
Finally, the preprocessed data reaches its ultimate destination, the conscious part of the brain (the cortical gray matter), via a vast interconnect array known as the white matter. Each region of the cerebral cortex is dedicated to one of the senses - vision, hearing, smell, taste or touch - or to cognitive functions such as attention, reasoning, evaluation, judgment and planning.
Gemelli said: 'The mathematical equivalent of the brain's neural network is the perceptron, which learns through its variable-conductance synapses as big data streams through it. We can add multiple levels of perceptrons to learn anything a human can learn, such as the different ways people walk.'
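Gemelli's perceptron analogy can be sketched in a few lines. In the classic perceptron learning rule, the weights play the role of variable-conductance synapses and are nudged whenever a streamed sample is misclassified; the training data below is a made-up stand-in, not anything from Bosch.

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    """Classic perceptron: weights act as variable-conductance synapses,
    adjusted each time a streamed sample is misclassified."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if np.dot(w, xi) + b > 0 else 0
            err = target - pred
            w += lr * err * xi   # adjust the "synaptic conductance"
            b += lr * err
    return w, b

# Toy linearly separable data (illustrative only): AND-gate behavior.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print(w, b)
```

Stacking several of these layers, with a differentiable activation in place of the hard threshold, is what gives the multi-level perception discussed next.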
Influence of Moore's Law
Moore's Law also helps make multi-level perceptrons - so-called deep learning - achievable, because it delivers ever-cheaper computing to edge sensors, hub-level intelligence and cloud analytics alike.
Gemelli said: 'First, volume helps - the more big data, the better. Second, variety helps in learning different aspects of a thing, such as the different gaits people use when walking. Third, the velocity at which the perceptron has to respond needs to be quantified. Once you have defined these three parameters, you can optimize a neural network for any particular application.'
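The three parameters Gemelli lists map onto the familiar three Vs of big data and can be captured as a simple sizing specification. The field names and example numbers below are hypothetical, purely to show how the three Vs pin down a design:

```python
from dataclasses import dataclass

@dataclass
class BigDataSpec:
    """Hypothetical spec capturing Gemelli's three parameters."""
    volume_samples: int    # how much training data is available
    variety_classes: int   # how many distinct patterns (e.g. gaits) to learn
    velocity_ms: float     # max latency the perceptron may take to respond

# Illustrative numbers for a wearable gait classifier (assumed, not Bosch's).
gait_spec = BigDataSpec(volume_samples=1_000_000,
                        variety_classes=12,
                        velocity_ms=50.0)
print(gait_spec)
```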
For example, Gemelli said, a smartwatch/smartphone/smart-cloud combination can divide and conquer big data. The smartwatch evaluates real-time, continuous data from an individual user, then sends the most important digest data to the smartphone every few minutes. The smartphone, in turn, sends trend summaries to the smart cloud just a few times a day. Detailed analysis of the most important data points is performed in the cloud, and the results are fed back as timely advice - both to the specific user wearing the smartwatch and to other smartwatch wearers pursuing the same goals.
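A sketch of that three-tier digest-and-forward flow, with made-up reduction rules (each tier keeps only a summary of what the tier below sends) standing in for whatever Bosch actually implements:

```python
import statistics

def watch_digest(raw_samples):
    """Tier 1 (smartwatch): reduce continuous sensor readings to a digest."""
    return {"mean": statistics.mean(raw_samples), "peak": max(raw_samples)}

def phone_trend(digests):
    """Tier 2 (smartphone): compress minutes of digests into a daily trend."""
    return {"avg_of_means": statistics.mean(d["mean"] for d in digests),
            "max_peak": max(d["peak"] for d in digests)}

def cloud_analyze(trends):
    """Tier 3 (cloud): detailed analysis across users, fed back as advice."""
    overall = statistics.mean(t["avg_of_means"] for t in trends)
    return f"population average {overall:.1f}; advise users above it to rest"

# Illustrative flow: one user's heart-rate samples through all three tiers.
samples = [72, 75, 71, 90, 88, 76]
digests = [watch_digest(samples) for _ in range(10)]  # every few minutes
trends = [phone_trend(digests)]                       # a few times a day
print(cloud_analyze(trends))
```

Each hop shrinks the data by orders of magnitude, which is the whole argument: only the digest, not the raw stream, ever travels upstream.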
Currently, Bosch is emulating this three-level brain model by adding processors to its edge sensors, enabling them to recognize and prioritize big data before it is sent to the smart hub.
Gemelli said: 'Smart cities, in particular, need smart sensors with built-in processors that can identify trends at the edge in real time. The sensors then send these trends to the sensor hub, which analyzes them and forwards only the most important messages to the cloud, where actionable information is distilled for city managers.'
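One plausible reading of that edge-first flow, sketched with an invented traffic-sensor example (the thresholds, message format and function names are all assumptions for illustration):

```python
def edge_filter(readings, baseline, tolerance=0.2):
    """Edge sensor with a built-in processor: compute a local trend and
    forward a message only when it deviates meaningfully from baseline."""
    trend = sum(readings) / len(readings)
    if abs(trend - baseline) / baseline > tolerance:
        return {"priority": "high", "trend": trend}  # send to sensor hub
    return None                                      # nothing worth sending

# Illustrative smart-city use: traffic counts per minute at one intersection.
print(edge_filter([120, 135, 128], baseline=100))  # anomalous -> forwarded
print(edge_filter([98, 102, 101], baseline=100))   # normal -> suppressed
```

The design choice mirrors the neuron analogy from earlier in the article: the edge node absorbs the raw stream and emits, at most, one prioritized message.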
Compiled by Susan Hong