Unassisted AI makes it possible to process far more video data than people ever could on their own. With new technology improving both how video is captured and how it is processed, the possibilities are endless. This domain is unfamiliar to many, so read on to find out what it means for you.
Video analytics, simply put, is the process of gathering real-time visual data from cameras and using the information to recognize patterns in behavior or keep records of events. Right now, video analytics has a few basic components:
Cameras are currently the largest producers of visual data; traffic cameras and in-store cameras are two common examples you might encounter. As camera prices fall, cameras become more accessible, making them an economical and accurate tool for data collection.
Cameras can send their data for processing in many different ways. Intelligence software can reside on the camera itself, or the camera could be connected to an edge data center or larger data center. If the camera is collecting live streams, large centers are typically used for processing.
The data is then processed, first by unassisted AI and then by humans. Only about 5% of all video data is analyzed by humans; the rest is filtered out by AI systems so that no time is wasted sorting through irrelevant footage.
An example of a large-scale implementation of video analytics and unassisted AI is the city of Bristol. Bristol has over 700 cameras, and if a phone call reports an accident or other abnormal activity, recordings from that specific time and place can be retrieved later to assist investigations. Unfortunately, there is no way to respond to situations in real time yet; the big focus for the future is making video analytics proactive rather than simply reactive.
Cameras collect enormous amounts of data, and storing and processing all of it would take a great deal of manpower. Unassisted AI and varying degrees of data processing, however, make it possible to focus only on the relevant data.
A single camera produces about 10 GB of data a day. In places like stadiums and airports that have their own connectivity, this data can be processed on site. For more dispersed cameras, the data has to be sent to another location to be sorted through.
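To get a feel for the scale, here is a back-of-envelope calculation combining the figures mentioned in this article (roughly 10 GB per camera per day, and a Bristol-sized deployment of about 700 cameras):

```python
# Back-of-envelope data volume for a city-scale camera deployment,
# using the rough figures cited in the article.
GB_PER_CAMERA_PER_DAY = 10
cameras = 700  # approximate count for a city like Bristol

total_gb_per_day = GB_PER_CAMERA_PER_DAY * cameras
print(total_gb_per_day)         # 7000 GB per day
print(total_gb_per_day / 1000)  # i.e. about 7 TB per day
```

Seven terabytes a day is far more footage than any human team could review, which is why the filtering described below matters.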
Right now, the big technologies in this market are edge computing and new software. Edge computing places processing near the data source: AI intercepts the stream close to the camera and forwards only the relevant data. Edge nodes can also store data as it is being processed, which prevents flooding central data centers with information.
Usually, filtered data is sent to a central platform that does more high-value analytics. This is where humans finally step in.
Unassisted AI can be trained to identify what is normal and what is not, and can use that distinction to filter what information reaches higher data centers. Essentially, AI can define and isolate normal events and mark them as irrelevant, reducing the amount of time humans have to spend going through video data.
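As a rough sketch, that filtering step might look like the loop below. The `is_normal` function here is a hypothetical stand-in for whatever trained model a real deployment uses, and the event labels are invented for illustration:

```python
# Sketch of AI pre-filtering: events the model considers normal are
# dropped, so only the abnormal remainder is forwarded for human review.

def is_normal(event, baseline=frozenset({"person_walking", "car_passing"})):
    """Hypothetical stand-in for a trained model: routine labels are normal."""
    return event["label"] in baseline

def filter_events(events):
    """Keep only the events the model flags as abnormal."""
    return [e for e in events if not is_normal(e)]

events = [
    {"label": "person_walking", "camera": 12},
    {"label": "car_passing", "camera": 12},
    {"label": "person_climbing_fence", "camera": 12},
]
flagged = filter_events(events)
print(flagged)  # only the fence-climbing event survives the filter
```

In a real system the classifier would be a neural network rather than a label lookup, but the structure is the same: define normal, discard it, and escalate the rest.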
New software is allowing AI to become smarter and therefore more efficient at this process. Many universities are releasing free, open-source data sets that can be fed into neural networks, advancing the state of AI through software that is available to everyone.
Another software advance facilitating the use of unassisted AI is statistical pattern recognition, which allows normalcy to be defined by a mathematical algorithm, making data processing more methodical. For visual data, normalcy is established through object-detection patterns.
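To make "normalcy defined by a mathematical algorithm" concrete, here is one simple statistical approach (a sketch, not any specific product): treat historical object-detection counts as a distribution, and flag counts that fall far outside it. The counts below are made up for illustration:

```python
# Statistical normalcy sketch: model per-hour detection counts from a
# camera as a distribution, and flag counts outside mean +/- k stdevs.
import statistics

def normalcy_band(history, k=3.0):
    """Return the (low, high) range considered normal."""
    mu = statistics.mean(history)
    sigma = statistics.stdev(history)
    return mu - k * sigma, mu + k * sigma

def is_abnormal(count, history, k=3.0):
    """True if this hour's count falls outside the normal band."""
    lo, hi = normalcy_band(history, k)
    return not (lo <= count <= hi)

# Made-up hourly pedestrian counts from one camera on typical days.
history = [42, 38, 45, 40, 44, 39, 41, 43, 37, 46]
print(is_abnormal(41, history))   # a typical count -> False
print(is_abnormal(120, history))  # a sudden crowd  -> True
```

Real systems use far richer models than a mean and standard deviation, but the principle is the same: once "normal" is a number, deciding what to discard becomes methodical rather than subjective.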
While this all sounds very fancy, unassisted AI and video analytics have implications at both large and small scales.
Video analytics can contribute a great deal to business intelligence. Cameras can capture heat maps and record what caught people's attention, which can be used to set strategy and assess foot traffic through a business.
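A foot-traffic heat map of the kind described above can be built by binning detected visitor positions into a grid. This is a minimal sketch; the grid size and detection coordinates are invented for illustration:

```python
# Minimal heat-map sketch: count detected visitor positions per grid
# cell to show where foot traffic concentrates on a store floor.

def build_heatmap(positions, width=4, height=3):
    """positions are (x, y) grid-cell indices; returns a count per cell."""
    grid = [[0] * width for _ in range(height)]
    for x, y in positions:
        grid[y][x] += 1
    return grid

# Made-up detections: most visitors cluster at cell (1, 0), say a display.
positions = [(1, 0), (1, 0), (2, 1), (1, 0), (0, 2)]
heatmap = build_heatmap(positions)
for row in heatmap:
    print(row)
```

In practice the positions would come from an object detector running on the camera feed, and the grid would be overlaid on the store's floor plan to show which displays draw attention.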
Unassisted AI and video analytics could also serve safety and security in the near future. Bristol has already begun the trend of becoming a technology-filled city, and evolving software could make video information available fast enough for law enforcement to react to situations as they happen rather than merely supplement investigations afterward.
Additionally, video analytics has many implications for city planning and congestion. For example, it could assess whether parking is available in certain areas or whether traffic is congested. Improvements like these shape residents' daily lives.
Video analytics and processing through unassisted AI are just the beginning of a fine-tuned society. The information cameras provide, and the work unassisted AI does to filter it, offer real hope for a future filled with useful information that improves quality of life and business.