What's the Difference Between Artificial Intelligence, Machine Learning, and Deep Learning? (2)
Date: 2016-08-01 | Source: Nvidia
This is the first in a multi-part series of articles on the basics of deep learning by longtime technology journalist Michael Copeland.
Machine Learning — An Approach to Achieve Artificial Intelligence
Machine learning at its most basic is the practice of using algorithms to parse data, learn from it, and then make a determination or prediction about something in the world. So rather than hand-coding software routines with a specific set of instructions to accomplish a particular task, the machine is “trained” using large amounts of data and algorithms that give it the ability to learn how to perform the task.
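To make the "trained from data, not hand-coded" distinction concrete, here is a minimal sketch (not from the article) of a classifier that learns its own rule: instead of a programmer hard-coding a cutoff, the code searches labeled examples for the threshold that best separates the two classes. The data is made up for illustration.

```python
# Minimal sketch of learning a rule from data instead of hand-coding it:
# fit a 1-D threshold classifier from labeled (value, label) examples.
# The dataset below is invented purely for illustration.

def fit_threshold(samples):
    """Pick the threshold that best separates labels 0 and 1."""
    best_t, best_acc = None, -1.0
    for t in sorted(x for x, _ in samples):
        # accuracy of the rule "predict 1 when x >= t"
        acc = sum((x >= t) == bool(y) for x, y in samples) / len(samples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# label 1 for "large", 0 for "small" -- the machine infers the cutoff
data = [(0.1, 0), (0.3, 0), (0.4, 0), (0.7, 1), (0.8, 1), (0.9, 1)]
t = fit_threshold(data)

def predict(x):
    return int(x >= t)
```

The same shape scales up: real machine learning swaps the one threshold for millions of parameters, but the principle — parameters tuned against data rather than written by hand — is the same.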
Machine learning came directly from the minds of the early AI crowd, and the algorithmic approaches over the years included decision tree learning, inductive logic programming, clustering, reinforcement learning, and Bayesian networks, among others. As we know, none achieved the ultimate goal of General AI, and even Narrow AI was mostly out of reach with early machine learning approaches.
As it turned out, one of the very best application areas for machine learning for many years was computer vision, though it still required a great deal of hand-coding to get the job done. People would go in and write hand-coded classifiers like edge detection filters so the program could identify where an object started and stopped; shape detection to determine if it had eight sides; a classifier to recognize the letters “S-T-O-P.” From all those hand-coded classifiers they would develop algorithms to make sense of the image and “learn” to determine whether it was a stop sign.
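The edge-detection filters mentioned above were exactly this kind of hand-written feature. As a hedged sketch (a toy stand-in, not any real vision library), the snippet below hand-codes a vertical-edge detector: it flags pixels where brightness jumps between left and right neighbors, which is the sort of building block engineers once stacked into full classifiers.

```python
# Toy hand-coded feature of the kind described in the text: a vertical-edge
# detector over a grayscale image stored as nested lists of floats.

def edge_strength(img, r, c):
    """Horizontal gradient at (r, c): brightness change across neighbors."""
    return abs(img[r][c + 1] - img[r][c - 1])

def detect_vertical_edges(img, threshold=0.5):
    """Return (row, col) positions where the gradient exceeds the threshold."""
    h, w = len(img), len(img[0])
    return [(r, c)
            for r in range(h)
            for c in range(1, w - 1)
            if edge_strength(img, r, c) > threshold]

# A 3x4 "image": dark left half, bright right half -> an edge down the middle.
img = [[0.0, 0.0, 1.0, 1.0] for _ in range(3)]
edges = detect_vertical_edges(img)
```

Every such rule had to be imagined, written, and tuned by a person — which is precisely the brittleness the article goes on to describe.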
Good, but not mind-bendingly great. Especially on a foggy day when the sign isn't perfectly visible, or a tree obscures part of it. There's a reason computer vision and image detection didn't come close to rivaling humans until very recently: it was too brittle and too prone to error.
Time, and the right learning algorithms, made all the difference.
Deep Learning — A Technique for Implementing Machine Learning
Another algorithmic approach from the early machine-learning crowd, artificial neural networks, came and mostly went over the decades. Neural networks are inspired by our understanding of the biology of our brains – all those interconnections between the neurons. But, unlike a biological brain where any neuron can connect to any other neuron within a certain physical distance, these artificial neural networks have discrete layers, connections, and directions of data propagation.
You might, for example, take an image and chop it up into a bunch of tiles that are inputted into the first layer of the neural network. In the first layer, individual neurons do their work and then pass the data to a second layer. The second layer of neurons does its task, and so on, until the final layer produces the final output.
Each neuron assigns a weighting to its input — how correct or incorrect it is relative to the task being performed. The final output is then determined by the total of those weightings. So think of our stop sign example. Attributes of a stop sign image are chopped up and “examined” by the neurons — its octagonal shape, its fire-engine red color, its distinctive letters, its traffic-sign size, and its motion or lack thereof. The neural network’s task is to conclude whether this is a stop sign or not. It comes up with a “probability vector,” really a highly educated guess, based on the weighting. In our example the system might be 86% confident the image is a stop sign, 7% confident it’s a speed limit sign, 5% confident it’s a kite stuck in a tree, and so on — and the network architecture then tells the neural network whether it is right or not.
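The "weightings combined into a probability vector" idea can be sketched in a few lines. Everything below is invented for illustration — the feature scores, class names, and weights are not from any real model — but the mechanism (weighted sums pushed through a softmax normalization) is how networks turn scores into the kind of 86%/7%/5% guess described above.

```python
import math

# Hypothetical per-attribute scores for one image (all values invented):
# how strongly it shows an octagon, red color, "STOP" letters, sign size.
features = [0.9, 0.8, 0.95, 0.7]

# One invented weight vector per class; a larger weighted sum means the
# input looks more like that class.
weights = {
    "stop sign":   [2.0, 1.5, 2.5, 1.0],
    "speed limit": [0.1, 0.2, 0.5, 1.0],
    "kite":        [0.3, 0.8, 0.0, 0.1],
}

def probability_vector(feats, wts):
    """Weighted sums per class, normalized with softmax to sum to 1."""
    scores = {k: sum(f * w for f, w in zip(feats, v)) for k, v in wts.items()}
    z = sum(math.exp(s) for s in scores.values())
    return {k: math.exp(s) / z for k, s in scores.items()}

probs = probability_vector(features, weights)
```

A real network stacks many such weighted layers with nonlinearities between them, but the final step — normalized scores read as confidences — is the "probability vector" the article describes.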
Even this example is getting ahead of itself, because until recently neural networks were all but shunned by the AI research community. They had been around since the earliest days of AI, and had produced very little in the way of “intelligence.” The problem was that even the most basic neural networks were very computationally intensive; it just wasn’t a practical approach. Still, a small heretical research group led by Geoffrey Hinton at the University of Toronto kept at it, finally parallelizing the algorithms for supercomputers to run and proving the concept, but it wasn’t until GPUs were deployed in the effort that the promise was realized.
If we go back again to our stop sign example, chances are very good that as the network is getting tuned or “trained” it’s coming up with wrong answers — a lot. What it needs is training. It needs to see hundreds of thousands, even millions of images, until the weightings of the neuron inputs are tuned so precisely that it gets the answer right practically every time — fog or no fog, sun or rain. It’s at that point that the neural network has taught itself what a stop sign looks like; or your mother’s face in the case of Facebook; or a cat, which is what Andrew Ng did in 2012 at Google.
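The "wrong a lot, then nudged until right" training loop can be sketched with a tiny perceptron-style update (a simplified stand-in, not the backpropagation real deep networks use). The toy examples below are invented: two feature scores per input, labeled 1 for "stop-sign-like" and 0 otherwise; each mistake shifts the weights slightly toward the correct answer.

```python
# Minimal sketch of iterative training: show labeled examples repeatedly,
# nudging the weights only when the current prediction is wrong
# (a perceptron update -- a stand-in for real backpropagation).

def train(examples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in examples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred  # 0 when correct; +/-1 triggers a nudge
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Invented data: (octagon score, red score) -> 1 means "stop-sign-like"
examples = [([0.9, 0.8], 1), ([0.8, 0.9], 1),
            ([0.1, 0.2], 0), ([0.2, 0.1], 0)]
w, b = train(examples)

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
```

Scale the same loop up to millions of images and millions of weights and you have, in spirit, the training process the article describes.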
Ng’s breakthrough was to take these neural networks, and essentially make them huge, increase the layers and the neurons, and then run massive amounts of data through the system to train it. In Ng’s case it was images from 10 million YouTube videos. Ng put the “deep” in deep learning, which describes all the layers in these neural networks.
Today, image recognition by machines trained via deep learning is in some scenarios better than that of humans, in tasks ranging from identifying cats to spotting indicators of cancer in blood and tumors in MRI scans. Google’s AlphaGo learned the game, and trained for its Go match — it tuned its neural network — by playing against itself over and over and over.
Thanks to Deep Learning, AI Has a Bright Future
Deep learning has enabled many practical applications of machine learning and, by extension, the overall field of AI. Deep learning breaks down tasks in ways that make all kinds of machine assists seem possible, even likely. Driverless cars, better preventive healthcare, even better movie recommendations are all here today or on the horizon. AI is the present and the future. With deep learning’s help, AI may even get to that science fiction state we’ve so long imagined.