With iOS 11, Apple introduced the Core ML framework, which enables developers to integrate machine learning models into their apps easily. But what is machine learning, and what are its advantages over implementing algorithms in plain code? And why the hell should you run it on your own mobile device instead of using a cloud service?
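To make the "plain code vs. learning" contrast concrete, here is a toy sketch (not from the talk itself): instead of hand-coding a rule, we learn the parameters of a simple model from example data with gradient descent. All names and numbers are illustrative.

```swift
import Foundation

// Toy training data generated from the "unknown" rule y = 2x + 1.
// In a real app this would be collected data, not a known formula.
let data: [(x: Double, y: Double)] = (0..<20).map { i in
    let x = Double(i) / 2.0
    return (x, 2.0 * x + 1.0)
}

// Instead of writing the rule ourselves, we *learn* w and b.
var w = 0.0
var b = 0.0
let learningRate = 0.01

for _ in 0..<5000 {
    var gradW = 0.0
    var gradB = 0.0
    for (x, y) in data {
        // Mean-squared-error gradient for the prediction w*x + b.
        let error = (w * x + b) - y
        gradW += 2 * error * x / Double(data.count)
        gradB += 2 * error / Double(data.count)
    }
    w -= learningRate * gradW
    b -= learningRate * gradB
}

print(String(format: "learned w = %.2f, b = %.2f", w, b))
```

The learned parameters end up close to the true rule (w ≈ 2, b ≈ 1) without the rule ever being written into the code — which is the core idea a neural network scales up to many thousands of parameters.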
In 2016 Apple had already opened up some of the technologies it uses for word suggestions and face detection by releasing BNNS in the Accelerate framework. But not many mobile developers have adopted these possibilities, because there is a lack of understanding on both sides: mobile developers don't have the knowledge to create neural networks, and data scientists don't know much about the specifics of implementing neural networks for the iOS platform.
At the same time, other technologies for machine learning are taking over the market. Data scientists all over the world use CNTK, Theano, or especially TensorFlow to create their neural networks. Apple recognised that movement and, with Core ML, delivers a framework that can use neural networks pretrained with these data-science tools.
On top of Core ML, Apple also delivers pretrained neural networks, which are encapsulated for us in, for example, the Vision framework and the linguistic classes of Foundation.
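As a rough sketch of what those built-in APIs look like in practice: Vision's face detection runs a pretrained network on-device with no custom model and no cloud round trip. `image` here is a placeholder for any `CGImage` your app already has.

```swift
import Vision

// Detect faces with Vision's built-in, pretrained model.
// `image` is a placeholder for a CGImage from your app.
func detectFaces(in image: CGImage) throws {
    let request = VNDetectFaceRectanglesRequest { request, error in
        guard let faces = request.results as? [VNFaceObservation] else { return }
        for face in faces {
            // boundingBox is in normalized coordinates (0...1).
            print("Face at \(face.boundingBox)")
        }
    }
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])
}
```

Everything happens locally, which is exactly the privacy and latency argument for on-device inference over a cloud service.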
In my talk I'd like to briefly explain what machine learning is and what a developer can already do with the APIs on top of Core ML, without any cloud service. The next part would be to create a small neural network with some free data-science frameworks and tools and integrate it into an iOS app with Core ML.
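The integration side of that second part might look roughly like the following sketch. `EmotionClassifier` is a hypothetical name: it stands for the Swift class Xcode generates from whatever `.mlmodel` file the conversion tools (e.g. coremltools) produce from the trained network.

```swift
import CoreML
import Vision

// `EmotionClassifier` is hypothetical - Xcode generates a class like this
// from the .mlmodel file produced by the conversion tools.
func classifyEmotion(in image: CGImage) throws {
    let model = try VNCoreMLModel(for: EmotionClassifier().model)
    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let observations = request.results as? [VNClassificationObservation],
              let top = observations.first else { return }
        // Print the most likely label, e.g. an emotion class.
        print("\(top.identifier) (confidence \(top.confidence))")
    }
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])
}
```

Vision handles the image scaling and color-format conversion the model expects, so the app code stays small regardless of which data-science tool trained the network.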
Hi, of course an image-analysis example is one of the most common examples. I thought about an emotion-detection CNN.
But it would also be possible to create a neural network for text classification, or for some calculation like predicting iPhone prices from eBay listings.
Or do you have some use case in mind?
Of course, I do realise this makes things quite a bit harder. Sorry.
Machine learning is such a great new topic. It would be fantastic if the example covered something novel rather than the usual "let's identify a hot dog or a smile in a photo". After all, many developers might benefit from machine learning, but have nothing to do with images, video, or the usual suspects.