Blog

Saturday, 21 May 2016 07:26

Learning About Deep Learning


The concept is certainly compelling. Having a machine capable of reacting to real-world visual, auditory or other types of data and then responding in an intelligent way has been the stuff of science fiction until very recently. We are now on the verge of this new reality, yet there is little general understanding of what it is that artificial intelligence, convolutional neural networks, and deep learning can (and can't) do, or what it takes to make them work.

At the simplest level, much of the current effort around deep learning involves very rapid recognition and classification of objects, whether visual, audible, or some other form of digital data. Using cameras, microphones and other types of sensors, data is fed into a system that contains a multi-level set of filters providing increasingly detailed levels of differentiation. Think of it like the animal or plant classification charts from your grammar-school days: Kingdom, Phylum, Class, Order, Family, Genus, Species.
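To make that "multi-level set of filters" idea a bit more concrete, here is a minimal sketch, assuming the PyTorch library, of a small convolutional network whose stacked layers act as increasingly detailed levels of differentiation before a final classification. The layer sizes, image size and ten-way class count are illustrative assumptions, not anything specified in the article.

```python
# Minimal illustrative CNN: each convolutional layer extracts progressively
# more abstract features; a final linear layer maps them to class labels.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low-level edges and colors
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # mid-level textures and parts
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),  # higher-level shapes
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)      # final, most specific label

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.features(x).flatten(1)
        return self.classifier(feats)

# Example: classify a single 64x64 RGB image (random stand-in data here).
model = TinyClassifier()
scores = model(torch.randn(1, 3, 64, 64))
print(scores.argmax(dim=1))  # index of the most likely class
```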

The trick with machines is to get them to learn the characteristics or properties of these different classification levels and then be able to use that learning to accurately classify a new object they haven't been previously exposed to. That's the gist of the "artificial intelligence" that gets used to describe these efforts. In other words, while computers have long been able to identify things they've seen before, the critical new capability is recognizing that a new image is not just a dog, but a long-haired miniature dachshund, after they've "seen" enough pictures of dogs. Actually, what's really important, and really new, is the ability to do this extremely rapidly and accurately.

Like most computer-related problems, the work to enable this has to be broken down into a number of individual steps. The word "convolution" refers to a complex process that folds back on itself; it also describes a mathematical operation in which results from one level are fed forward to the next level in order to improve the accuracy of the process. The phrase "neural network" stems from early efforts to create a computing system that emulated the human brain's individual neurons working together to solve a problem. While most computer scientists now seem to discount the comparison to the functioning of a real human brain, the idea of many very simple elements connected together in a network and working together to solve a complex problem has stuck, hence convolutional neural networks (CNNs). Deep learning refers to the number, or depth, of filtering and classification levels used to recognize an object. While there is some debate about how many levels are necessary to justify the phrase "deep learning," many people seem to suggest 10 or more. (Microsoft's research work on visual recognition went to 127 levels!)

A key point in understanding deep learning is that there are two critical but separate steps involved in the process. The first involves doing extensive analysis of enormous data sets and automatically generating "rules" or algorithms that can accurately describe the various characteristics of different objects. The second involves using those rules to identify objects or situations based on real-time data, a process known as inferencing.

The "rule" creation efforts necessary to build these classification filters are done offline in large data centers using a variety of different computing architectures. NVIDIA has had great success with its Tesla (the chip, not the car)-based GPU-compute initiatives. These leverage the floating point performance of graphics chips and the company's GPU Inference Engine (GIE) software platform to help reduce the time necessary to do the data input and analysis tasks of categorizing raw data from months to days to hours in some cases. We've also seen some companies talk about the ability of other customizable chip architectures, notably FPGAs (Field Programmable Gate Arrays), to handle some of these tasks as well. Intel recently purchased Altera specifically to bring FPGAs into its data center family of processors, in an effort to drive the creation of even more powerful servers, including ones uniquely suited to performing these (and other) types of analytics workloads.

Once the basic "rules" of classification have been created in these non-real-time environments, they have to be deployed on devices that accept live data input and make real-time classifications.
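As a rough illustration of that first, offline "rule creation" step, here is a minimal PyTorch sketch of fitting a small network's weights to labeled examples. The tiny model, the random stand-in data and the optimizer settings are all placeholder assumptions; real training runs over enormous labeled data sets, typically on GPUs, as described above.

```python
# Illustrative offline training loop: the network's weights (the learned
# "rules") are repeatedly adjusted to better match labeled examples.
import torch
import torch.nn as nn

# A very small self-contained stand-in model.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(16, 10),
)
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for a real labeled dataset: random images with random labels.
images = torch.randn(32, 3, 64, 64, device=device)
labels = torch.randint(0, 10, (32,), device=device)

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)   # how wrong are the current "rules"?
    loss.backward()                         # propagate the error back through the layers
    optimizer.step()                        # adjust the filter weights to reduce the error
```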
Though related, this is a different set of tasks and a different type of work than what's used to create the rules in the first place. In this inferencing area, we're just starting to see a number of companies talk about bringing deep learning and artificial intelligence to a variety of devices. In truth, there's little to no new "learning" going on in these implementations; they are essentially focused on recognizing the objects, situations or data points they are pre-programmed to look for, based on the rules or algorithms that have been loaded onto them for a particular application. Still, this is an enormously difficult task because of the need to run the multiple layers of a convolutional neural network in real time.

Qualcomm, for example, just announced that its 820 chip, known primarily as the compute engine inside many of today's high-end smartphones, can be used for deep learning and neural network applications. The new ingredient required to make this work is the Snapdragon Neural Processing Engine, an SDK powered by the company's Zeroth Machine Intelligence Platform. The combination can be used on the 820 to speed the performance of CNNs and deep learning on devices ranging from connected video cameras to cars and much more. The 820 incorporates a CPU, GPU and DSP, all of which could potentially be used to run deep learning algorithms for different applications.

In the case of autonomous cars, which are expected to be one of the key beneficiaries of deep learning and neural networks, NVIDIA's liquid-cooled Drive PX2 platform can also accelerate neural network performance. Announced at this year's CES, the Drive PX2 includes two next-generation SoCs (Systems on Chip: essentially a CPU, GPU and other computing elements all connected together on a single chip). It is specifically designed to monitor the camera, LIDAR and other sensor inputs from a car, then recognize objects or situations and react accordingly.

Future iterations of AI and deep learning accelerators will likely be able to bring some of the offline "rule creating" mechanisms onboard, so that objects equipped with these components will be able to get smarter over time. Of course, it's also possible to update the algorithms on existing devices in order to achieve a similar result. Regardless of how the technology evolves, it's going to be a critical element in the devices around us for some time to come, so it's important to understand at least a little bit about how the magic works.

Source: techpinions.com
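To make the two-step split described in the article concrete, here is a minimal PyTorch sketch of the second, on-device step: inferencing with a network whose weights were produced offline. The weight file name and the camera frame are hypothetical stand-ins; the point is simply that no new learning happens on the device, only classification against the pre-loaded rules.

```python
# Illustrative on-device inferencing: load frozen weights, classify incoming data.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(16, 10),
)
# model.load_state_dict(torch.load("trained_weights.pt"))  # hypothetical file produced offline
model.eval()                              # no learning happens on the device

with torch.no_grad():                     # inference only: no gradients, faster and lighter
    frame = torch.randn(1, 3, 64, 64)     # stand-in for one preprocessed camera frame
    prediction = model(frame).argmax(dim=1)
    print(prediction)                     # index of the recognized class
```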

Last modified on Monday, 23 May 2016 18:19


K2 Content

  • A synergetic R-Shiny portal for modeling and tracking of COVID-19 data

    Dr. Mahdi Salehi, an associate member of SDAT and assistant professor of statistics at the University of Neyshabur, has introduced a useful online interactive dashboard that visualizes and tracks confirmed cases of COVID-19 in real time. The dashboard was made publicly available on 6 April 2020 to show the counts of confirmed cases, deaths, and recoveries of COVID-19 at the country or continent level. It is intended as a user-friendly tool for researchers as well as the general public to track the COVID-19 pandemic, and it is generated from trusted data sources and built with open-source R software (Shiny in particular), ensuring a high degree of transparency and reproducibility.

    Access the shiny dashboard: https://mahdisalehi.shinyapps.io/Covid19Dashboard/

    Written on Friday, 08 January 2021 07:03 in SDAT News
  • First Event on Play with Real Data

    The Scientific Data Analysis Team (SDAT) intends to organize the first event on the value of data, giving data holders and data analysts an opportunity to extract maximum value from their data. The event is organized by the International Statistical Institute (ISI) and SDAT, and hosted at Bu-Ali Sina University, Hamedan, Iran.

    The organizers and data providers will give more information about the goals of the initial ideas, team arrangement, competition process, and the benefits of attending this event in a webinar hosted on the ISI GoToWebinar system. Everyone is invited to participate in this webinar for free, but registration on the webinar system is required by 30 December 2020.

    Event Time: 31 December 2020 - 13:30-16:30 Central European Time (CET)

    Register for the webinar: https://register.gotowebinar.com/register/8913834636664974352 

    More details about this event: http://sdat.ir/en/playdata 

    Aims and outputs:

    • Playing with real data using exploratory and predictive data analysis techniques
    • A platform connecting a limited number of data providers with hundreds to thousands of data science teams
    • Improving the creativity and scientific reasoning of data scientists and statisticians
    • Finding possible “bugs” in current data analysis methods and new developments
    • Learning different views of a dataset

    Awards:

    The best-report awards consist of cash prizes:
    $400 for first place,
    $200 for second place, and
    $100 for third place.

    Important Dates: 

    Event Webinar: 31 December 2020 - 13:30-16:30 Central European Time (CET). 
    Team Arrangement: 01 Jan. 2021 - 07 Jan. 2021
    Competition: 10 Jan. 2021 - 15 Jan. 2021
    First Assessment Result: 25 Jan. 2021
    Selected Teams Webinar: 30 Jan. 2021
    Award Ceremony: 31 Jan. 2021

    Please share this event with your colleagues, students, and data analysts.

    Written on Wednesday, 23 December 2020 13:45 in SDAT News
  • Development of Neuroimaging Symposium and Advanced fMRI Data Analysis

    The Development of Structural and Functional Neuroimaging Symposium was held at the School of Sciences, Shiraz University, on 17 April 2019. The Advanced fMRI Data Analysis Workshop was also held on 18-19 April 2019. For more information please visit: http://sdat.ir/dns98

    Written on Sunday, 21 April 2019 12:18 in SDAT News
  • Releasing Rfssa Package by SDAT Members at CRAN

    The Rfssa package is now available on CRAN. Dr. Hossein Haghbin and Dr. Seyed Morteza Najibi (SDAT members) have published this package to provide the collection of functions needed to implement Functional Singular Spectrum Analysis (FSSA) for analysing Functional Time Series (FTS). FSSA is a novel non-parametric method for decomposing and reconstructing FTS. For more information, please visit the package's GitHub homepage.

    Written on Sunday, 03 March 2019 21:03 in SDAT News
  • Data Science Symposium

    The Symposium on Data Science Development and its job opportunities was held at the Faculty of Science, Shiraz University, on 20 February 2019. For more information please visit: http://sdat.ir/dss97

    Written on Friday, 01 February 2019 00:13 in SDAT News

About Us

SDAT is an abbreviation for Scientific Data Analysis Team. It consists of groups of specialists in various fields of data science, including Statistical Analytics, Business Analytics, Big Data Analytics and Health Analytics.

Get In Touch

Address: No. 15, 13th West Street, North Sarrafan, Apt. No. 1, Saadat Abad, Tehran

 Phone: +98-910-199-2800

Email: info@sdat.ir
