It’s that time of the year again. Summer, you may ask?
No, it’s the time of the year when Ocean Outdoor launches its Digital Creative Competition. And for Quividi, this (selfishly) means that in a couple of months we’ll be able to see how great creative minds twisted our platform to design unique creative concepts.
In past years, participants used our solution to play with the words and letters displayed on screens to raise awareness of dyslexia, to hide part of the audience passing the screens to demonstrate the terrible fact that children go missing every year, or to draw attention to violence against women (did you know that the Women’s Aid campaign originated from this competition? And yes, it ended up winning two Gold Lions in Cannes).
This year’s competition is special: not only is this the 10th edition, but for the first time ever, participants will also be able to design creative concepts for the Piccadilly Lights, probably the nicest and most engaging screen in the world.
Obviously, Ocean Outdoor’s screens come with plenty of technologies besides ours that you can use and play with: Full Motion, Wi-Fi, Live Streaming…
So, take the time to brainstorm and please submit an entry.
And to help some of you make the best use of our technology (and, let’s not be shy, win the competition), we thought of creating a cheat sheet of what Quividi can and can’t do for you.
Let’s start by telling you more about the fundamentals of our Audience & Campaign Intelligence platform.
QUIVIDI IS FACE ANALYTICS MARRIED WITH PRIVACY
Our technology processes images from a standard camera which is co-deployed with a Digital Out Of Home screen. It relies on face detection and classification, not on face recognition. These are two different technologies.
Face detection and classification looks for the presence of a face and estimates whether its traits are closer to a certain gender and age. By contrast, facial recognition matches a calculated “faceprint” with an existing faceprint database in order to identify a particular person.
Our platform anonymously detects faces of individuals passing the screens while they are in the field of vision of the camera and estimates some of their demographic and facial characteristics (see below).
This means that the Quividi platform cannot recognize an individual, either in absolute terms (full identity) or even in terms of repeated exposures. It cannot recognize that a person was at a sequence of different locations, or visited the same location twice. If a person leaves the field of view of the camera and comes back, the platform will think that this is a new person as it has no memory of the face it saw previously.
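For the technically curious, this “no memory” behaviour can be pictured with a toy sketch. This is our own illustration, not actual Quividi code: the point is simply that an ID only lives while a face stays in view, so a returning person always counts as a new one.

```python
# Toy illustration (not Quividi's actual code): faces are tracked only while
# visible; once a face leaves the field of view, its ID is discarded, so a
# returning person is counted as a brand-new face.
import itertools

class EphemeralTracker:
    def __init__(self):
        self._next_id = itertools.count(1)
        self._active = {}  # per-frame face handle -> session-scoped ID

    def update(self, visible_faces):
        """visible_faces: handles of faces present in the current frame."""
        # Assign a fresh ID to every face not already being tracked.
        for face in visible_faces:
            if face not in self._active:
                self._active[face] = next(self._next_id)
        # Forget faces that left the frame -- no faceprint, no memory.
        for face in list(self._active):
            if face not in visible_faces:
                del self._active[face]
        return dict(self._active)

tracker = EphemeralTracker()
tracker.update({"a"})        # a person appears -> ID 1
tracker.update(set())        # the person leaves -> forgotten
ids = tracker.update({"a"})  # the same person returns -> new ID 2
```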
QUIVIDI IS ABOUT CONTEXTUAL REAL-TIME DATA
For each detected face, our platform reports the following data points in real time:
- Position of each face (X, Y, Z)
- Number of seconds of dwell (since first detection)
- Number of seconds of attention time (i.e. the part of dwell when the face was turned towards the screen)
- Face points and face direction (we track 68 points on the face and can use them to estimate which direction the face is turned, or how the mouth is moving). Note that this does not tell you precisely where the eyes are aiming: our technology isn’t an eye-tracking solution.
- Absolute age (+/- 5 years)
- Mood (5 stages, from very unhappy to very happy)
- Presence of glasses, moustache, beard
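To make this concrete, here is a hypothetical sketch of what a single real-time face data point could look like. The field names and types are our own illustration, not the platform’s actual API:

```python
# Hypothetical shape of a per-face, real-time data point; field names are
# an illustration, not Quividi's actual API.
from dataclasses import dataclass

@dataclass
class FaceEvent:
    x: float; y: float; z: float  # position relative to the camera
    dwell_s: float                # seconds since first detection
    attention_s: float            # seconds the face was turned to the screen
    gender: str                   # "male" / "female" (best guess)
    age: int                      # estimated age, +/- 5 years
    mood: int                     # 1 (very unhappy) .. 5 (very happy)
    glasses: bool
    moustache: bool
    beard: bool

face = FaceEvent(x=0.4, y=1.6, z=3.2, dwell_s=7.5, attention_s=2.0,
                 gender="female", age=31, mood=4,
                 glasses=True, moustache=False, beard=False)
assert face.attention_s <= face.dwell_s  # attention is a subset of dwell
```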
Our platform is great at powering interactive Digital Out of Home experiences, but some care must be taken in how its data is handled to avoid creative and communication pitfalls. Due to the natural limitations of using computer vision in the real world, e.g. limits in camera resolution, computing power and algorithms, there will always be a certain error rate in the detected audience data.
These are our platform’s known limitations to power creative campaigns:
- Faces are detected within a certain detection distance (generally 10 to 15 meters) and camera coverage (about 70°);
- It can track up to ~100 faces simultaneously;
- By design, we don’t remember if someone was already detected;
- Face classification isn’t 100% accurate: in rare cases a man can be detected as a woman and vice versa, or the age estimate can be off by more than +/-10 years from the real age. Take these data points as “best guesses”.
- Accuracy decreases in low light (e.g. at night).
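One practical way to live with these “best guesses” is to act on the majority of the current audience rather than on a single classification. Here is a minimal sketch of that idea; the 70% threshold is an arbitrary assumption of ours, not a platform setting:

```python
# Sketch: since per-face classification is a best guess, trigger demographic
# content only on a clear majority of the current audience, never on a
# single face. The 70% threshold is an arbitrary choice for illustration.
from collections import Counter

def majority_group(genders, min_share=0.7):
    """Return the dominant gender label, or None if no clear majority."""
    if not genders:
        return None
    label, count = Counter(genders).most_common(1)[0]
    return label if count / len(genders) >= min_share else None

majority_group(["female", "female", "female", "male"])  # -> "female" (75%)
majority_group(["female", "male"])                      # -> None (no majority)
```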
WHAT EXPERIENCES CAN YOU IMAGINE BASED ON THE QUIVIDI PLATFORM?
Well, that’s a bit your job, isn’t it? Happy to help though. We’ve compiled a list of interactive experiences that you can use and build your creative ideas on:
- Leveraging face detection
- Leveraging the number of watchers
  - Count the number of persons paying attention (e.g. the Women’s Aid campaign)
  - Play something different when a threshold is reached, either on that screen right away, or aggregated across all screens
- Leveraging the attention time
- Leveraging the demographic or mood classification
  - Play content tailored to the gender and/or age of the audience. This could be tailored to one person (if they are alone at a certain spot in front of the screen) or to the majority demographic group (e.g. the Aldo & Pandora campaign). You can, for instance, guess a person’s birth year and assume they experienced a certain event or piece of pop culture at a certain age, or adapt the language or imagery of your creative if a certain percentage of the audience corresponds to millennials or seniors…
  - Hide persons of a certain gender or age by overlaying a specific shape on top of them, or by blurring that zone (e.g. the NSPCC “Disappearing Children” concept)
  - Overlay an image on the video stream coming from the camera, over the face or body of a person, depending on their gender and/or mood (e.g. the Emoji campaign)
  - Play something if someone smiles (e.g. the Stimorol campaign)
You want to be careful with sensitive data points such as age and gender, whose misinterpretation can be offensive. Natural error rates could turn a positive and engaging experience for most into a negative one for a few.
The solution is to make scenarios with age- and gender-targeted content subtler and less simplistic, so that a misclassification is not perceived by the viewer.
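One way to keep targeting subtle is to always have a neutral creative and fall back to it whenever the audience mix is ambiguous, so a misclassified viewer never sees content that is obviously “for” the wrong group. The sketch below is our own illustration, with arbitrary age thresholds and made-up creative names:

```python
# Sketch (ours, not a Quividi API): serve age-flavoured creative only when
# the crowd is homogeneous enough; otherwise play a neutral default. The
# thresholds and creative names are illustrative assumptions.
def pick_creative(audience_ages, neutral="generic_spot"):
    if not audience_ages:
        return neutral
    avg = sum(audience_ages) / len(audience_ages)
    spread = max(audience_ages) - min(audience_ages)
    if spread > 25:          # mixed-age crowd: stay neutral
        return neutral
    if avg < 35:
        return "millennial_flavoured_spot"
    if avg >= 55:
        return "senior_flavoured_spot"
    return neutral

pick_creative([24, 28, 31])  # -> "millennial_flavoured_spot"
pick_creative([22, 60])      # -> "generic_spot" (ambiguous crowd)
```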
- Leveraging face or body attributes
  - Play something for people with moustaches, beards or glasses (e.g. the Movember campaign)
- Leveraging the position of the audience
  - Play something if someone stays at a certain position
  - Play something if someone gets closer to the screen (e.g. the Lufthansa campaign)
  - Detect people running or cycling towards a screen by calculating how fast their distance from the camera changes (if you assume they are jogging, offer them an energy drink!)
  - Get eyes or an avatar to look towards a specific person’s face
  - Get people to direct a character on screen by moving themselves (however, this will not work as well as with 3D cameras, since there’s no easy skeleton tracking with standard cameras)
  - Estimate where a person’s body is based on the position of their face
- Leveraging face movement
  - Show something if someone’s mouth is moving (no need for a microphone to detect that someone is speaking or shouting)
  - Move something on screen based on the face direction of a specific watcher
- Mixing Quividi criteria together
  - Check whether a man and a woman of similar age are close to one another (which might suggest they’re a couple) – likewise for a family, or for a group of same-age persons (which might suggest friends)
  - Use AI to compute the interest given to each component of a campaign and over-promote the winning ones (e.g. the Global Goals campaign)
  - See also the GMC campaign for an example mixing many criteria
- Mixing Quividi criteria with 3rd-party information
  - Mix time of day with demographics (e.g. push adult-oriented content around 9 PM, but not at 4 PM when children are around)
  - Mix weather and demographics
  - Mix sports news with the number of people and/or demographics
  - Mix thresholds of certain values (e.g. pollution level, traffic-jam level) with demographics
  - Use screen touches (if this feature is available) to register an engagement, and mix it with the demographics of the person closest to the screen
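As a taste of what mixing criteria can look like, here is a minimal sketch combining a third-party signal (the time of day) with the share of children in the detected audience to gate adult-oriented content. The hour thresholds, the 10% child-share cutoff and the function name are all our own assumptions for illustration:

```python
# Sketch: gate adult-oriented creative on time of day AND live demographics.
# Thresholds are illustrative assumptions, not Quividi or Ocean settings.
def allow_adult_content(hour, child_share):
    """Allow adult-oriented creative late in the evening, and never when
    a meaningful share of the detected audience looks like children."""
    evening = hour >= 21 or hour < 5
    return evening and child_share < 0.1

allow_adult_content(hour=21, child_share=0.0)  # -> True
allow_adult_content(hour=16, child_share=0.0)  # -> False (daytime)
allow_adult_content(hour=22, child_share=0.3)  # -> False (children present)
```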
For more information do not hesitate to email firstname.lastname@example.org