An interactive voice experience

Teaching a dad to say "shoe" is not as easy as you might think. To say it correctly you need to think about your pitch, how you pronounce the word, and how long you should hold it. Try saying "shoe" to your judge, AKA the baby, and see if you can get it right.

A voice game with a mission to make dads talk more to their babies.

Together with Edelman+Deportivo we created Baby Talk For Dads, an interactive, game-like product for UNICEF and the H&M Foundation. The mission was to teach dads how to speak better to their infants. Studies have shown that during the first 1,000 days of a baby's life, their brain can form new connections at the amazing rate of 1,000 per second. It's a once-in-a-lifetime opportunity for parents to connect with their child.

[Image: Baby Talk screens on phones]
So how did we get this to actually work?

First of all, we needed to identify the possibilities for real-time interaction on both the mobile and desktop web. Since this innovation moves fast and will travel across many markets, we needed a stable experience that requires no server-side analysis. This became the foundation for the technology. Let us take you through the complete process of designing and producing a digital product like this.

[Image: Baby Talk desktop start screen]
Step 1: Get a scientist to explain what good baby talk is

The team at Edelman+Deportivo soon identified Dr. Marina Kalashnikova, Researcher in Infancy Studies at the MARCS Institute, University of Western Sydney, and leader of the MARCS BabyLab. Numerous hours later, together with Dr. Kalashnikova, we had defined what baby talk actually is and how it differs from ordinary speech.

Step 2: Identify technology and make a dynamic sound engine

We worked out the foundation for the technology through fast prototyping, and then, together with our friend and acclaimed digital audio expert Svante Stadler, developed an advanced dynamic sound engine that works a little like SingStar. The basics of the technology: we analyze the speed, variation, and pitch of your voice in real time and convert this into visual feedback.
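To give a feel for the analysis step, here is a minimal, hypothetical sketch of pitch estimation via autocorrelation. This is not the production engine; in the real product the frames would come from the microphone via the Web Audio API, while here we feed a synthetic tone.

```javascript
// Estimate the dominant pitch of one audio frame by autocorrelation:
// the lag with the strongest self-similarity corresponds to the period.
function detectPitch(samples, sampleRate) {
  // Search lags covering roughly 60–500 Hz, a typical voice range.
  const minLag = Math.floor(sampleRate / 500);
  const maxLag = Math.floor(sampleRate / 60);
  let bestLag = -1;
  let bestCorr = 0;
  for (let lag = minLag; lag <= maxLag; lag++) {
    let corr = 0;
    for (let i = 0; i + lag < samples.length; i++) {
      corr += samples[i] * samples[i + lag];
    }
    if (corr > bestCorr) {
      bestCorr = corr;
      bestLag = lag;
    }
  }
  return bestLag > 0 ? sampleRate / bestLag : 0;
}

// Synthetic 220 Hz sine wave as a stand-in for a microphone frame.
const sampleRate = 44100;
const samples = new Float32Array(2048);
for (let i = 0; i < samples.length; i++) {
  samples[i] = Math.sin((2 * Math.PI * 220 * i) / sampleRate);
}
const pitch = detectPitch(samples, sampleRate);
```

Tracking how this estimate moves from frame to frame gives the speed and variation signals as well, which can then drive the visual feedback.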

Step 3: Dynamically read an SVG and draw it in real time

To accompany the sound engine, we designed multiple guide curves as SVG graphics and used them to draw dynamically while playing sounds and interacting with the product. Mapping these curves and putting everything together seamlessly, performance-wise, proved harder than we thought. After optimizing the parsing and drawing, we now have a smooth dynamic product that fits both bandwidth and client performance constraints.
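As an illustration of the idea (not the production code), here is a sketch that parses a deliberately tiny subset of the SVG path syntax, a single quadratic Bézier segment, and samples it into points that a renderer could draw incrementally while the audio plays.

```javascript
// Parse "M x0 y0 Q cx cy x1 y1" into start, control, and end points.
function parseQuadPath(d) {
  const nums = d.match(/-?\d+(\.\d+)?/g).map(Number);
  const [x0, y0, cx, cy, x1, y1] = nums;
  return { p0: [x0, y0], c: [cx, cy], p1: [x1, y1] };
}

// Sample the curve at evenly spaced parameter values.
function sampleQuad({ p0, c, p1 }, steps) {
  const pts = [];
  for (let i = 0; i <= steps; i++) {
    const t = i / steps;
    const u = 1 - t;
    // Standard quadratic Bézier: B(t) = u²·p0 + 2ut·c + t²·p1
    pts.push([
      u * u * p0[0] + 2 * u * t * c[0] + t * t * p1[0],
      u * u * p0[1] + 2 * u * t * c[1] + t * t * p1[1],
    ]);
  }
  return pts;
}

const curve = parseQuadPath("M 0 100 Q 50 0 100 100");
const points = sampleQuad(curve, 10);
```

Pre-sampling the curves into point arrays like this keeps the per-frame drawing cost low, which matters when the same animation has to run smoothly on both desktop and mobile.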

[Image: Step 3 curve]
Step 4: Use snapping (rubber effect) to help the user

The UX concept of the whole product rests on a simple notion: learning something this important needs to be fun, even if speaking like a baby might feel a little weird. The interaction design is therefore game-like. Steps, progress, real-time feedback, and ease of use are basic principles we applied throughout the product.

One specific detail is the snapping effect your interaction has against the curve, which acts as a helper and makes the process a little more forgiving.
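A minimal sketch of the rubber-band idea, with illustrative names and thresholds rather than the product's actual values: when the user's value is close enough to the guide curve, pull it part of the way toward the curve so small inaccuracies are forgiven.

```javascript
// Pull userValue toward curveValue when they are within `threshold` units.
// The pull is proportional to proximity, so it eases in like a rubber band.
function snapToCurve(userValue, curveValue, threshold = 20, strength = 0.6) {
  const distance = curveValue - userValue;
  if (Math.abs(distance) > threshold) return userValue; // too far off: no help
  const proximity = 1 - Math.abs(distance) / threshold; // 0 at the edge, 1 on the curve
  return userValue + distance * strength * proximity;
}
```

Because the effect fades out smoothly at the edge of the threshold, there is no visible jump when the user drifts in and out of snapping range.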

Step 5: Create the right feeling

We didn't want the game to feel like an ordinary SingStar game; we wanted it to be more childlike. The curves that match the sound engine have interesting shapes, and we based the whole conceptual design on them. Built on the UNICEF brand colors, the game feeling is immediate, and the product feels unique and different at each step.

Go try the result for yourself and learn the importance of speaking baby talk with your newborn.

Do you have a problem to solve?

We work with a wide range of digital products and services, from intricate brand and company websites to high-fidelity technical innovations, always executed to the highest standards. Feeling inspired? Drop us a line and let's talk about how we can help you turn your ideas into reality.

Contact us