Full Spectrum AI analysis platform
IntelexVision delivers a complete, end-to-end solution enabling effective real-time monitoring of video at scale. Our analytics autonomously monitor and interpret massive amounts of video footage in an unbiased, unsupervised way, using AI to ‘understand’ and learn from any monitored scene and to generate critical triggers.
iSentry is a full-spectrum AI analysis platform for real-time monitoring of video surveillance imagery, designed to handle a wide range of complex, live video environments.
Our unsupervised, self-learning networks extract relevant data from imagery through behavioral anomaly detection (Unusual Behavior) in fluid, busy environments, or advanced motion analysis (TREX) in more static surroundings where intrusion detection is required.
These alerts are then classified for better contextualization and sent to a powerful rules engine which automates part of the decision-making process.
Put simply, iSentry monitors massive volumes of CCTV footage and alerts the control room when it observes something out of the ordinary.
iSentry delivers a financial return on existing CCTV surveillance infrastructure, which is normally used only for forensic analysis of past incidents, by adding immediate real-time analysis and response capabilities.
Unusual Behavior detection is the most important iSentry algorithm. It is driven by an unsupervised AI platform whose learning is based on pixel analysis: the system ‘learns’ how objects normally move in a scene and, after establishing a norm, creates an alert on any deviation. The amount of video to be analyzed by a monitoring control center operator is reduced by up to 98%, and a single Unusual Behavior license often replaces 5-10 licenses based on rules-based algorithms.
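iSentry's actual models are proprietary, but the underlying idea of learning a per-scene motion norm and alerting on deviations can be illustrated with a minimal sketch. The grid cells, running Welford statistics, and the k-sigma threshold below are illustrative assumptions, not iSentry internals:

```python
import math

class MotionNormLearner:
    """Toy illustration: learn a 'normal' motion magnitude per scene cell
    with a running mean/variance (Welford's method), then flag deviations."""

    def __init__(self, cells, k=3.0, warmup=100):
        self.k = k              # how many standard deviations count as unusual
        self.warmup = warmup    # observations needed before alerting
        self.n = [0] * cells
        self.mean = [0.0] * cells
        self.m2 = [0.0] * cells

    def update(self, cell, magnitude):
        """Feed one motion measurement for a cell; return True if anomalous."""
        n, mean, m2 = self.n[cell], self.mean[cell], self.m2[cell]
        anomalous = False
        if n >= self.warmup:
            std = math.sqrt(m2 / (n - 1)) if n > 1 else 0.0
            anomalous = abs(magnitude - mean) > self.k * std
        # Welford's online update of the running mean and variance
        n += 1
        delta = magnitude - mean
        mean += delta / n
        m2 += delta * (magnitude - mean)
        self.n[cell], self.mean[cell], self.m2[cell] = n, mean, m2
        return anomalous
```

In a real system the per-cell measurements would come from optical flow or object trajectories, and the learned model would be far richer, but the principle is the same: no labeled data, a norm established per scene, and alerts only on deviations from it.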
The iSentry Platform is exceptional at target acquisition thanks to its ability to learn the environment covered by each camera. iSentry accurately extracts true target information even under the most challenging environmental conditions. It can distinguish between moving foliage and true targets, delivering minimal false alarms. Additionally, its proprietary neural networks perform industry-leading visual and thermal camera object classification.
The platform provides expert configuration options that eliminate environmental noise. Its multi-scene analysis detects small targets at great distance while simultaneously performing accurate human tracking and classification at short distance, all on the same view. iSentry can also run on PTZ cameras, providing autonomous scene-stability tools and alert suppression while the camera moves from one preset position to the next.
We understand that every property has unique security needs. That’s why we offer customized security solutions tailored to our clients’ specific requirements. From the initial consultation to ongoing maintenance, we work closely with our clients to ensure their complete satisfaction.
For sterile environments or specific areas of a video scene, iSentry provides easy-to-use, multi-directional video tripwires that alert on any moving object within the specified area that crosses a defined line.
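A multi-directional tripwire reduces to a segment-crossing test between the wire and an object's movement between two consecutive tracked positions. A minimal sketch, assuming 2D tracked coordinates (the function names and the left/right direction labels are illustrative, not iSentry's API):

```python
def _side(a, b, p):
    # Sign of the cross product (b-a) x (p-a): which side of line AB point P is on.
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def tripwire_event(wire_a, wire_b, prev, curr):
    """Detect whether an object moving prev -> curr crossed the wire segment.
    Returns 'left-to-right', 'right-to-left', or None."""
    s_prev = _side(wire_a, wire_b, prev)
    s_curr = _side(wire_a, wire_b, curr)
    if s_prev == 0 or s_curr == 0 or (s_prev > 0) == (s_curr > 0):
        return None  # object did not change sides of the wire's line
    # The crossing point must also lie within the wire segment itself:
    # the wire endpoints must sit on opposite sides of the movement segment.
    t_a = _side(prev, curr, wire_a)
    t_b = _side(prev, curr, wire_b)
    if (t_a > 0) == (t_b > 0):
        return None  # crossed the infinite line, but outside the wire's span
    return 'left-to-right' if s_prev > 0 else 'right-to-left'
```

Because the direction of the crossing is recovered from the sign change, a single wire can alert on movement in either direction, or be configured to alert on only one.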
Left Object Detection creates an alert when an object enters a scene and remains stationary for longer than a predefined time, or when an object is removed from a scene.
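The stationary-object half of this feature can be sketched as a per-object dwell timer: each tracked object anchors a position, and an alert fires once it stays near that anchor past a threshold. The `dwell_seconds` and `move_tolerance` parameters below are illustrative assumptions, not iSentry's configuration:

```python
class LeftObjectDetector:
    """Toy sketch: alert once when a tracked object stays stationary
    longer than `dwell_seconds`."""

    def __init__(self, dwell_seconds=60, move_tolerance=5.0):
        self.dwell = dwell_seconds
        self.tol = move_tolerance          # pixels of drift still "stationary"
        self.state = {}                    # object_id -> (anchor, since, alerted)

    def observe(self, object_id, position, now):
        """Feed one (id, (x, y), timestamp) observation; return True exactly
        once, when the object first exceeds the dwell time."""
        anchor, since, alerted = self.state.get(object_id, (position, now, False))
        dx = position[0] - anchor[0]
        dy = position[1] - anchor[1]
        if (dx * dx + dy * dy) ** 0.5 > self.tol:
            # Object moved: re-anchor and reset the stationarity timer.
            self.state[object_id] = (position, now, False)
            return False
        if not alerted and now - since >= self.dwell:
            self.state[object_id] = (anchor, since, True)
            return True
        self.state[object_id] = (anchor, since, alerted)
        return False
```

Removed-object detection is the mirror case, typically implemented by comparing the scene against a learned background model rather than a tracked object, so it is omitted from this sketch.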
Typically, up to 80% of alerts can be handled by the iSentry platform without the need for human intervention. The inherent risk of automation is largely mitigated by the underlying engine design: rules are applied only when their results are certain. Key to the iSentry philosophy is that any alert that fails automation is placed in front of an operator for further investigation and decision. Together with the rules engine, this drastically reduces the number of false positive alarms, by as much as 50-70%.
The iSentry process runs in five steps:

1. iSentry efficiently decodes multiple video streams in real time, which are then analyzed by one or more of the AI-based analytics; any detection results in an “alert”, which starts the iSentry process.

2. A number of detected frames, extracted from the alert video, are analyzed by a GPU-based Deep Learning server. This maximizes processing efficiency and gives the system a much greater understanding of the alert.

3. With the greater understanding gained in step 2, the system automatically dismisses many alerts and elevates others to alarms.

4. Alerts that are neither dismissed nor elevated to alarm in step 3 are presented to the operator as a list of current alerts, each containing classified images and a roughly 5-10 second video clip. The operator then decides whether an alert is important (escalating it to an alarm) or not, eliminating most remaining false positives.

5. Once an alarm is generated, either automatically in step 3 or by the operator in step 4, the iSentry process is complete. All data associated with the alarm, including video, classifications, metadata, and operator input, is attached to the alarm for downstream processing.
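The automated part of this flow, applying rules only when their results are certain and deferring everything else to an operator, can be sketched as a small dispatch function. The class names, confidence threshold, and `Decision` outcomes below are illustrative assumptions, not iSentry's actual rule set:

```python
from dataclasses import dataclass, field
from enum import Enum

class Decision(Enum):
    DISMISS = "dismiss"      # auto-dismissed, never shown to an operator
    ALARM = "alarm"          # auto-escalated to alarm
    OPERATOR = "operator"    # uncertain: a human decides

@dataclass
class Alert:
    camera: str
    classes: list            # object classes from the Deep Learning step
    confidences: list        # matching classification confidences
    metadata: dict = field(default_factory=dict)

def rules_engine(alert, min_conf=0.8):
    """Toy rule set: decide automatically only when confident."""
    cls, conf = max(zip(alert.classes, alert.confidences),
                    key=lambda c: c[1], default=(None, 0.0))
    if conf < min_conf:
        return Decision.OPERATOR                 # result not certain
    if cls in ("foliage", "animal", "vehicle_expected"):
        return Decision.DISMISS                  # known-benign, high confidence
    if cls == "person" and alert.metadata.get("zone") == "sterile":
        return Decision.ALARM                    # certain threat: auto-escalate
    return Decision.OPERATOR
```

The key design property mirrored from the text is the default: anything the rules cannot decide with certainty falls through to `OPERATOR`, so automation reduces operator load without silently discarding uncertain events.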
iSentry’s Centralized Architecture has the advantage of low complexity and benefits from economies of scale. In a network containing many cameras, however, it can require extra bandwidth to bring all video into a single central location.
Distributed Architecture is suitable for a central control room monitoring several small and large distributed sites. It is not limited in terms of camera numbers, with the entire processing requirement handled on site. Sites become fully autonomous, each with the capacity for its own control room, with records and data stored on site. Only alert data is transmitted to the central control room, so bandwidth usage is limited to the alert and its video data.
This architecture takes advantage of a central Video Management System (VMS) with distributed processing while limiting the bandwidth required to stream live video. All iSentry processing, including Deep Learning, is done on an embedded edge device (such as an NVIDIA Jetson Nano), which imports live video from the cameras and then sends only alert data and alert video to the central control room.
Only the first processing layer of iSentry is managed on the edge device (such as a Raspberry Pi), while subsequent Deep Learning and Rules Engine processing layers are managed centrally. The advantages of this architecture are that Deep Learning processing can be a fully shared resource in the Control Center and that a wider range of embedded devices is supported.
ALIGNTECH INTERNATIONAL
Headquarters: 2106 HDS Business Centre, Cluster M, JLT, Dubai, United Arab Emirates
Email Address: info@aligntechme.com
Copyright © 2022 Align Group - All Rights Reserved.