A national state of emergency was declared to curb the spread of coronavirus infection, and restrictions on people's movements were introduced so that the 3 Cs (closed spaces, crowded places, close-contact settings) could be avoided.
■Powerful analysis in real time
It uses deep learning technology to recognize key demographic attributes such as age and gender, and measures many other features in real time, including facial expression, eye position, head direction, viewing time, and clothing color. Deep learning also allows six types of facial expression (happiness, anger, sadness, and so on) to be scored numerically.
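To make the output concrete, below is a minimal sketch of the kind of per-face record such a system could emit. The class, field names, and example values are assumptions for illustration, not the vendor's actual data schema or API.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

# Hypothetical per-face record illustrating the attributes described above.
# All field names are assumptions, not the product's real schema.
@dataclass
class FaceObservation:
    age_estimate: int                            # estimated age in years
    gender: str                                  # e.g. "male" / "female"
    expression_scores: Dict[str, float]          # six expressions scored 0.0-1.0
    eye_position: Tuple[int, int]                # pixel coordinates in the frame
    head_direction: Tuple[float, float, float]   # yaw, pitch, roll in degrees
    viewing_time_sec: float                      # how long the viewer faced the display
    clothing_color: str                          # dominant clothing color, e.g. "blue"

# Example record as a real-time pipeline might produce it.
obs = FaceObservation(
    age_estimate=34,
    gender="female",
    expression_scores={"happiness": 0.72, "anger": 0.03, "sadness": 0.05,
                       "surprise": 0.10, "fear": 0.02, "disgust": 0.08},
    eye_position=(412, 187),
    head_direction=(12.0, -4.5, 1.2),
    viewing_time_sec=3.8,
    clothing_color="blue",
)
print(max(obs.expression_scores, key=obs.expression_scores.get))  # -> "happiness"
```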
■Cross-platform
We provide accurate and flexible solutions that help our customers' businesses grow and stay ahead in the market. The system runs across a wide range of OS environments, from desktop applications to web services and mobile applications.
■“Blur” function for facial images
For security, only the extracted attribute data is saved. When a face is displayed on screen, a "blur" effect can be applied, and blurring is toggled simply by switching a flag on or off.
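As a rough illustration of how such a flag-controlled blur could work, the sketch below applies a Gaussian blur to an already-detected face region using OpenCV. The flag name, function, and surrounding code are assumptions, not the product's actual implementation.

```python
import cv2

BLUR_FACES = True  # assumed flag name; switching it off disables blurring

def draw_face(frame, box):
    """Blur the given face bounding box in place when the flag is on.

    `frame` is a BGR image (NumPy array) and `box` is (x, y, w, h) from
    an upstream face detector; both are hypothetical names for this sketch.
    """
    if BLUR_FACES:
        x, y, w, h = box
        roi = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return frame
```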
■Addition of people count function
Accurate people counting is achieved by detecting people's bodies. The conversion rate, that is, how many of the people who visited actually made a purchase (conversion rate = number of purchasers ÷ number of visitors), can be calculated in real time. This allows retail stores to take immediate action, such as adjusting staffing.
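The conversion-rate calculation itself is simple arithmetic; a minimal sketch with hypothetical counter values might look like this.

```python
def conversion_rate(purchasers: int, visitors: int) -> float:
    """Conversion rate = number of purchasers / number of visitors."""
    return purchasers / visitors if visitors else 0.0

# Example: 38 purchases out of 412 counted visitors so far today (made-up numbers).
print(f"{conversion_rate(38, 412):.1%}")  # -> 9.2%
```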
■Facial emotion/expression recognition technology
A 3D wireframe model of the face is built using face tracking. Once the model is built, head movements and facial features such as the eyebrows, eyelids, and mouth are tracked. Searching for feature points requires selecting parts of the face and performing repeated calculations, so a method called AdaBoost is used to make this computation more efficient.
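The article does not say which AdaBoost-based detector is used. One widely available example of the same idea is OpenCV's Viola-Jones cascade classifier (a boosted detector), sketched below for face and eye regions; the choice of these cascades and the helper function are assumptions for illustration only.

```python
import cv2

# Viola-Jones cascades (trained with AdaBoost) shipped with OpenCV.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_face_parts(gray_frame):
    """Return (face_box, eye_boxes) for the first detected face, if any."""
    faces = face_cascade.detectMultiScale(gray_frame, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None, []
    x, y, w, h = faces[0]
    # Restrict the eye search to the face region to cut down computation.
    eyes = eye_cascade.detectMultiScale(gray_frame[y:y + h, x:x + w])
    return (x, y, w, h), [(x + ex, y + ey, ew, eh) for ex, ey, ew, eh in eyes]
```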
Motion is measured in 2D by template matching between frames at different resolutions. The 2D image motion, expressed as a displacement vector relative to the previous frame, is modeled by projecting it onto the 3D model, and the 3D motion is estimated from the 2D motion of several points on the mesh.
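The multi-resolution, frame-to-frame matching described here resembles coarse-to-fine (pyramidal) tracking. The sketch below uses OpenCV's pyramidal Lucas-Kanade optical flow, which is one such technique and not necessarily the one the product uses, to obtain the per-point 2D motion vectors that a 3D model could then be fitted to.

```python
import cv2
import numpy as np

def track_mesh_points(prev_gray, next_gray, points_2d):
    """Track 2D mesh points between two grayscale frames.

    Pyramidal Lucas-Kanade flow matches patches at several image
    resolutions and returns a displacement vector for each point.
    """
    pts = np.asarray(points_2d, dtype=np.float32).reshape(-1, 1, 2)
    new_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, pts, None,
        winSize=(21, 21), maxLevel=3)        # 3 pyramid levels above full resolution
    motion = (new_pts - pts).reshape(-1, 2)  # 2D motion vector per point
    return new_pts.reshape(-1, 2), motion, status.ravel().astype(bool)
```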