https://liliputing.com/2018/02/arms-project-trillium-brings-device-machine-learning-smartphones-speakers-cars.html
AI is moving into the mainstream-adoption phase, starting with products you'll be able to buy later this year. Chip designer ARM is unveiling two new categories of processors designed to bring machine learning to low-power devices without the need to phone home to a cloud server.
The new ARM ML processor is a new type of chip designed for on-device machine learning, while the ARM OD processor is designed specifically for one type of machine learning: object detection.
They’re both part of a larger initiative called Project Trillium that includes hardware and software to bring AI features to devices with ARM chips, including smartphones, smart speakers, security cameras, self-driving cars, and data centers.
ARM is describing Project Trillium as “a new suite of ARM IP” or intellectual property, because it’s not just about new chips: there are already devices with ARM-based chips that use machine learning including security cameras and other Internet of Things gadgets. ARM says device makers can continue to use ARM Cortex-M chips or other low-power processors.
But the new hardware seems pretty intriguing.
The company says its new ARM ML processor can handle 4.6 trillion operations per second while drawing less than 2 watts, while the new ARM Object Detection processor can process full HD video at 60 frames per second, detecting individual objects as small as 50×60 pixels.
It’ll be a little while before you start to see devices with those new chips: ARM will start making the new designs available in mid-2018, and we could start to see hardware in late 2018 or early 2019. Remember, the forward-looking chip makers out there haven’t necessarily been waiting on ARM to start adding AI and machine learning capabilities to their own ARM-based designs. Companies including Apple, Qualcomm, Huawei, Imagination, and Rockchip have all introduced processors with dedicated AI and neural networking features within the past year. The issue with these early efforts was their modest performance and the lack of an open standard for interoperability across products, with no open, uniform data exchange format. All of that has now arrived in ARM Trillium.
AI processing power has so far been doubling roughly every six months, and this jump to a generic 4.6 trillion operations per second at less than 2 watts means another doubling of AI horsepower is taking place right now, at the general introduction of Trillium.
Why is this important? Among the early adopters, Huawei and Rockchip simply recycled the previous year's chipset with the addition of a small AI module, and both immediately outperformed the industry leaders of the time, with minimal disruption and a very small cash outlay.
But only on certain tasks, and only when using their own proprietary software. These very first efforts topped out at 2.4 TOPS or less.
Now here comes a canned, generic, anybody-can-use-it 4.6 TOPS setup, complete with object detection and machine learning software, all in an industry-standard data exchange format.
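A quick sanity check of the "doubling" claim, using the two figures quoted above (2.4 TOPS for the earliest on-device AI modules, 4.6 TOPS for the new ARM ML processor):

```python
# Rough check of the generation-over-generation jump described above.
# Both figures come from the text; they are marketing-level numbers,
# not benchmarked results.
prev_tops = 2.4   # earlier proprietary on-device AI modules
new_tops = 4.6    # ARM ML processor, per ARM's announcement

ratio = new_tops / prev_tops
print(f"Speedup: {ratio:.2f}x")  # roughly a doubling
```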
Now let's slowly repeat an important little nugget for complete clarity ..... 4.6 trillion operations per second while using less than 2 watts of power ..... In terms of raw operation rate, that is the level of the year-2000 ASCI White supercomputer. Not in total throughput, though: supercomputers of that era needed multiple thousands of full-sized desktop processors all working at the same time to hit 4.6 trillion operations per second cumulatively, because each individual desktop processor was actually quite a lot slower. And the whole shebang drew a massive 850 kW of electrical power when running at full speed (a small town's worth of power), and needed an industrial-sized cooling plant to keep the room cool as well.
But hey, it gives you a gut read on what 4.6 TOPS at less than 2 watts of 5-volt power really means .....
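The back-of-the-envelope ops-per-watt arithmetic behind that gut read, using only the figures quoted above:

```python
# Compare operations per watt: ARM's claimed ML processor figures vs.
# the ASCI White numbers quoted in the text (both taken at face value).
ml_ops = 4.6e12        # ARM ML processor: 4.6 trillion ops/sec
ml_watts = 2.0         # "less than 2 watts", per ARM

asci_ops = 4.6e12      # comparable cumulative raw operation rate
asci_watts = 850_000   # 850 kW, figure quoted above (compute only)

ml_eff = ml_ops / ml_watts
asci_eff = asci_ops / asci_watts
print(f"ARM ML:     {ml_eff:.2e} ops/W")
print(f"ASCI White: {asci_eff:.2e} ops/W")
print(f"Efficiency gain: {ml_eff / asci_eff:,.0f}x")  # 425,000x
```

Since the raw operation rates are taken as equal, the gain reduces to the power ratio: 850,000 W / 2 W = 425,000.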
ANY FUNCTION that you can program to run here in the AI ZONE runs 100 times faster than your competition's best and most expensive current product, at less than one-tenth the power draw from the battery. YES, EXPECT EVERYTHING TO CHANGE OVER TO AI VERY, VERY QUICKLY. Both Samsung and Micron are now shipping non-volatile 7 nm memory products that can run natively at these speeds, so memory bottlenecking isn't going to be a show-stopper on any properly designed product.
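Note that the two factors claimed above multiply when you think in energy terms (operations per joule), which is what actually matters for battery life:

```python
# Combining the two claims above: ~100x the throughput at ~1/10th the
# power draw. Both numbers are the text's claims, taken at face value.
speedup = 100      # relative throughput vs. competing products
power_ratio = 0.1  # fraction of the competitor's power draw

# ops/joule advantage = (ops/sec ratio) / (watts ratio)
energy_advantage = speedup / power_ratio
print(f"Ops-per-joule advantage: {energy_advantage:.0f}x")  # 1000x
```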