Enterprises are using multiple types of AI applications, with one in ten enterprises using ten or more. The most popular use cases are chatbots, process automation solutions and fraud analytics. Natural language and computer vision AI underpin many prevalent applications as companies embrace the ability to replicate traditionally human activities in software for the first time, according to MMC Ventures.
Nowadays AI, buzzword or not, dominates almost every technology-related discussion. I would even risk the statement that there is hardly a single company in the world that has never considered placing AI somewhere in its five-year roadmap. Moreover, we use it daily: our smartphones and Amazon devices when we say “Call my wife” or “Alexa, open Pandora”; our TVs and Internet TV boxes recommending what to watch while we browse online streaming repositories; cars displaying recently recognized road signs; conferencing systems replacing our backgrounds during the so-called “shelter at home” era; and many more.
AI systems have come a long way since the first official workshops on the subject, reportedly held in the mid-1950s. Since then, thanks to tremendous progress in many areas (new algorithm design, specialized hardware and cloud services becoming available, the so-called data explosion enabling quality AI training, the development of both open-source and proprietary software libraries, growing investments, a widening range of applications and increased demand), AI has become a vital tool augmenting human capabilities across industries.
One area where AI delivers value is Machine Vision. Machine Vision, or Computer Vision, enables machines to identify objects and analyze scenes and activities in real-life visual environments. It does so by leveraging Deep Learning, sometimes supported by other techniques that increase its effectiveness in certain scenarios. In other words, thanks to these technologies cameras can see: they can notify people about a detected fire or quality issues on production lines, count objects on conveyor belts, analyze medical images, monitor buildings and inspect construction sites, or even guide robotic arms through various motions. If something can be captured in a picture or a video, chances are machines can be trained to analyze and identify it as well.
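To give a flavor of how a machine "sees" structure in pixel data, here is a toy sketch (purely illustrative, not any specific byteLAKE pipeline): a hand-written filter that responds where brightness changes from left to right, i.e. at vertical edges. Deep networks learn many such filters automatically from data instead of having them hand-designed.

```python
# Toy illustration: a hand-written vertical-edge filter applied to a
# tiny grayscale "image". Deep networks learn many such filters
# automatically during training instead of being hand-designed.

def convolve_edge(image):
    """Respond strongly where pixel brightness changes left-to-right."""
    kernel = [[-1, 0, 1],
              [-1, 0, 1],
              [-1, 0, 1]]
    h, w = len(image), len(image[0])
    out = [[0] * (w - 2) for _ in range(h - 2)]
    for i in range(h - 2):
        for j in range(w - 2):
            out[i][j] = sum(kernel[a][b] * image[i + a][j + b]
                            for a in range(3) for b in range(3))
    return out

# A 4x4 image: dark left half (0), bright right half (9).
img = [[0, 0, 9, 9]] * 4
print(convolve_edge(img))  # strong responses where dark meets bright
```

Real systems stack thousands of learned filters like this one, which is what lets them recognize objects rather than just edges.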
The possibilities are endless and, with research constantly advancing the field, they are limited mostly by human creativity. Needless to say, such AI systems can help humans become far more effective, assist them in demanding and repetitive tasks (e.g. exhausting, manual image or document analytics), or even offload dangerous or simply boring jobs (“Again, I need to enter these documents into the systems…”? Meet John and learn how he solved this problem by reading his story here).
byteLAKE helps clients succeed with AI in many ways:
- 👩🏫👨🏫 Through our AI Workshop we listen to and do our best to understand the needs and goals of our clients and partners. Our experts help them ask the right questions, explain technologies that might be useful and even shadow their teams to better understand daily tasks. We then assist in preparing deployment plans and new technology roadmaps: we help determine which challenges can be addressed with existing technologies and which would require additional research, and guide clients on the best ways to transform early ideas into tangible results.
- 🤔👁 A proof of concept is the natural next step to demonstrate the first tangible benefits: process optimization, task automation, increased system reliability etc. This is also the stage where we help and guide our clients to collect the right data. Our experienced data scientists help process it and prepare it for AI algorithms. Some of the key design decisions are also taken here, all focused on delivering the most efficient solution.
- 🧠💪 Solution delivery naturally follows, and in our case it is done in Agile sprints, meaning you get results every two weeks. One thing worth mentioning here is that we have established a strong research practice for a reason: most of our clients come with requests where we can hardly ever build an AI system purely from ready-made components. Although computer vision already offers a bucket full of libraries, a track record of projects and so on, our projects are very often far more ambitious than what off-the-shelf components can deliver. Besides, our experience in the closely related area of HPC (High-Performance Computing) helps us build solutions that are truly scalable and produce results efficiently at every stage of the AI application lifecycle: training (when we teach the algorithms to do certain things) and inference (when the algorithms do their work).
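To make the training/inference distinction concrete, here is a minimal sketch (a generic textbook example, not byteLAKE code): fitting a tiny linear model with gradient descent, then using the learned parameter on new input.

```python
# Minimal illustration of the two AI lifecycle stages:
# training (fitting parameters to data) and inference (using them).
# This toy example fits y ≈ w * x with plain gradient descent.

def train(samples, epochs=200, lr=0.05):
    """Training: iteratively adjust the weight to reduce prediction error."""
    w = 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = w * x
            grad = 2 * (pred - y) * x  # d/dw of the squared error
            w -= lr * grad
    return w

def infer(w, x):
    """Inference: apply the learned weight to new, unseen input."""
    return w * x

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # underlying rule: y = 2x
w = train(data)
print(round(infer(w, 5.0), 2))  # close to 10.0
```

Training is typically the compute-hungry stage (which is where HPC experience pays off), while inference is what runs in production, often under tight latency or power budgets.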
Along the way, and especially in Computer Vision projects, a natural question arises: shall we run AI on the server or closer to where the data is created or stored? As difficult as the question might sound, the real concerns behind it usually are: how much data is our system supposed to generate? Are we planning to build small devices that need some sort of AI as well (e.g. a medical diagnostic device)? Is the telecommunication network in our factory reliable? Will it offer enough bandwidth? These are among the many questions we help our clients and partners consider during the AI Workshop phase.
Another tricky part is hardware design, but here the rule of thumb is that most of our clients leave this task to us. This is also where we work closely with our partners to select the right components, be it an embedded device or a large-scale data center.
Then come the real use cases. We have delivered plenty and have started wrapping some of them up into products. Let me mention just a few to show how AI, and Machine Vision in particular, brings value across industries.
- Manufacturing / Industry 4.0 / Factories
Here most of the scenarios we have worked on involve production line monitoring and danger identification. Sometimes it is about product quality inspection: cameras monitor products on conveyor belts, and highly optimized AI algorithms take pictures, analyze them in real time and notify other systems immediately when faulty products or irregularities are identified. In other situations, byteLAKE’s solutions monitor production processes and detect, e.g., dangerous situations (like oil stains that could cause failures) or unusual incidents (like wrong proportions of chemical substances that could generate issues). The bottom line is that instead of forcing humans to stare at certain areas for hours, waiting for events that might or might not happen, we can place AI-powered cameras there to monitor and notify us about suspicious events.
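The monitoring pattern described above can be sketched as a simple loop: score each camera frame and notify downstream systems when the score crosses a threshold. Everything below (function names, the threshold, and the stand-in scoring logic) is a hypothetical illustration, not byteLAKE's actual API.

```python
# Hypothetical sketch of a conveyor-belt monitoring loop: score each
# camera frame with a stand-in "model" and notify downstream systems
# when the score crosses a threshold. All names and the scoring logic
# are illustrative assumptions, not a real product interface.

DEFECT_THRESHOLD = 0.8

def defect_score(frame):
    """Stand-in for a trained model: fraction of saturated pixels."""
    flat = [p for row in frame for p in row]
    return sum(1 for p in flat if p > 200) / len(flat)

def monitor(frames, notify):
    """Analyze frames in order; call notify() for each suspicious one."""
    for idx, frame in enumerate(frames):
        score = defect_score(frame)
        if score >= DEFECT_THRESHOLD:
            notify(idx, score)

alerts = []
frames = [
    [[10, 20], [30, 40]],        # normal frame
    [[255, 255], [255, 255]],    # fully saturated: flagged
]
monitor(frames, lambda idx, score: alerts.append(idx))
print(alerts)  # [1]
```

In production the scoring function would be a trained vision model and the notification would feed an alerting or PLC system, but the control flow stays this simple.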
We are working on another product, in consortium with a company named Protel (Turkey). While it is not yet time to share the details (a press release is being planned), I can tell you that the AI-powered cameras will support a variety of self-service functionalities. If you happen to work in the hotel industry, do reach out to us; we are about to have something cool for you and your visitors.
- Agriculture / Forestry / Government / Medical
Many scenarios: from tree counting and localization of illegal dumping areas, through traffic analytics, to complex visual data analysis directly on small, constrained, embedded devices. Other times it is about building monitoring, e.g. detecting certain behaviors at entry gates (such as whether people are wearing helmets). We also work with 3D visual data, building systems that guide robotic arms through various tasks. 3D medical data analytics and numerical algorithms for implant-related work have also been within our scope.