TalkSign, an artificial intelligence startup, is developing real-time communication tools that translate American Sign Language into speech and text in under 100 milliseconds, aiming to bridge one of the most persistent gaps in global communication accessibility.
Founded by Edidiong Ekong and Kazi Mahathir Rahman, the company sits at the intersection of artificial intelligence and assistive technology, combining machine learning research with product development to address long-standing barriers faced by deaf and hard-of-hearing individuals.
The startup’s ambition is to enable seamless, real-time interaction between deaf and hearing individuals without the need for interpreters or intermediary systems.
“The goal is to allow people who cannot hear or talk to communicate in real time without barriers,” Ekong explains.
What You Should Know
Before co-founding TalkSign, Ekong worked across several startups and growth-focused companies, helping scale products and drive revenue in different markets. However, TalkSign represents a clear shift in direction, from building for commercial expansion to building for accessibility and social impact.
At the heart of the company is a question Ekong has reflected on since childhood: what would it take to make communication truly accessible for everyone?
For most people, communication is seamless and often taken for granted. Conversations in hospitals, meetings, airports, and public spaces happen without friction. Information is exchanged instantly, and understanding is assumed.
For more than 430 million people globally who are deaf or hard of hearing, however, those same environments can present significant barriers.
A deaf patient in a hospital may struggle to describe symptoms accurately without an interpreter. Critical information can be delayed or lost. In workplaces, meetings can move too quickly to follow, while in public spaces, routine announcements may be missed entirely.
Invisible Barriers in Everyday Life
Ekong points out that these challenges are not isolated incidents but recurring patterns that shape access to healthcare, education, employment, and public services.
“A lot of information is lost,” he explains, particularly in high-pressure environments such as medical settings where accuracy and speed are essential.
Even in less critical situations, communication gaps can have lasting consequences.
Job opportunities may be missed, participation in meetings can be limited, and everyday interactions often require additional effort or planning.
Reimagining the Communication Environment
The idea for TalkSign took shape after Ekong watched a deaf participant in a meeting rely entirely on captions to follow the discussion. While functional, the experience highlighted the limitations of existing accessibility tools.
That moment prompted a shift in thinking, from adapting individuals to their environment, to adapting the environment itself.
“How do we modify the environment to support accessible communication? And how do we empower individuals with the tools that make communication easy?” he says.
Those questions now form the foundation of TalkSign’s product philosophy.
How TalkSign Works
The system combines mobile processing, artificial intelligence models, and smart glasses designed to project translations directly onto the lens in real time.
One model converts speech into sign language, while another translates sign language into spoken words and text. Together, the two models are designed to enable direct, bidirectional communication between deaf and hearing users.
The goal is to remove dependence on human interpreters and allow natural, real-time interaction in everyday settings.
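The article describes two models running side by side under a real-time constraint. The sketch below is purely illustrative, assuming a structure the article implies rather than TalkSign's actual code: one placeholder model per direction, plus a check against the sub-100-millisecond target mentioned above. All class and method names here are hypothetical.

```python
from dataclasses import dataclass

LATENCY_BUDGET_MS = 100  # the sub-100 ms target described in the article


@dataclass
class Translation:
    text: str
    latency_ms: float


class SignToSpeechModel:
    """Placeholder for a model mapping sign-language video frames to text."""

    def translate(self, frames):
        # A real model would run gesture recognition and language decoding.
        return Translation(text="HELLO", latency_ms=42.0)


class SpeechToSignModel:
    """Placeholder for a model mapping audio to a sign-language rendering."""

    def translate(self, audio):
        return Translation(text="<sign: hello>", latency_ms=38.0)


class BidirectionalPipeline:
    """Routes input to the right model and flags slow translations."""

    def __init__(self):
        self.sign_to_speech = SignToSpeechModel()
        self.speech_to_sign = SpeechToSignModel()

    def run(self, payload, direction):
        model = (self.sign_to_speech if direction == "sign->speech"
                 else self.speech_to_sign)
        result = model.translate(payload)
        # Flag outputs that miss the real-time budget so a UI could
        # degrade gracefully, e.g. fall back to captions.
        within_budget = result.latency_ms <= LATENCY_BUDGET_MS
        return result, within_budget
```

The point of the budget check is that a translation arriving late is almost as disruptive as no translation at all in live conversation, so a real system would need an explicit fallback path for slow outputs.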
Potential applications range from healthcare consultations and workplace meetings to education and entertainment, including real-time accessibility for film and media content.
Technical and Linguistic Challenges
Despite its promise, building such a system presents significant technical challenges. Unlike spoken languages, sign languages are not universal and vary across regions and communities. American Sign Language itself differs structurally from other sign systems globally.
In addition, available datasets are limited and often fragmented, making it difficult to train models with high accuracy across diverse contexts.
A central part of TalkSign’s approach is community involvement. Rather than developing in isolation, the team works directly with deaf users to test, validate, and refine the system.
User feedback plays a key role in improving model performance and ensuring the technology reflects real-world communication patterns rather than theoretical assumptions.
This iterative process is intended to reduce errors and improve usability over time, particularly as the system scales across different environments and languages.
Offline Capability and Real-World Deployment
TalkSign is also designed with low-connectivity environments in mind. In Nigeria and other markets where internet access may be inconsistent, the system can process data directly on-device.
This offline functionality is critical to ensuring accessibility tools remain usable outside high-bandwidth environments, particularly in public services and rural areas.
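The offline behaviour described above amounts to connectivity-aware routing: use a larger remote model when the network allows, and fall back to on-device processing otherwise. The following is a minimal sketch of that idea, not TalkSign's implementation; the threshold and function names are assumptions for illustration.

```python
def network_is_usable(ping_ms):
    # Assumption: round-trip latency above ~200 ms makes a cloud hop
    # incompatible with real-time translation, so we route on-device.
    return ping_ms is not None and ping_ms < 200


def translate(frames, ping_ms, cloud_model, device_model):
    """Prefer the cloud model on a healthy link; otherwise run locally.

    `cloud_model` and `device_model` are any callables taking the input
    frames and returning a translation; the second return value records
    which path was taken.
    """
    if network_is_usable(ping_ms):
        return cloud_model(frames), "cloud"
    # Offline path: run the (typically smaller, quantised) local model.
    return device_model(frames), "on-device"
```

A design like this keeps the tool functional in rural areas and public services where connectivity drops, at the cost of whatever accuracy gap exists between the full model and its on-device counterpart.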
As TalkSign continues development, its broader goal is to shift how communication accessibility is understood, moving it from a specialised accommodation to a standard layer of interaction across digital and physical spaces.
While the technology is still evolving, its ambition is to make real-time, bidirectional communication between deaf and hearing individuals not an exception, but an expectation.
Talking Points
It is significant that TalkSign is attempting to solve accessibility not through human intermediaries, but by using AI to enable direct, real-time communication between deaf and hearing individuals.
This approach positions the startup at the forefront of a growing shift in assistive technology, where the focus is moving from accommodation tools to full communication integration.
At Techparley, we see this as an important development in how artificial intelligence can be applied to long-standing social and infrastructural gaps, particularly in accessibility and inclusion.
The decision to combine speech-to-sign and sign-to-speech translation in real time, supported by smart glasses, reflects a bold attempt to make communication more natural and less dependent on external support systems.
As TalkSign evolves, there is a clear opportunity for partnerships with healthcare systems, educational institutions, and public sector organisations to accelerate deployment and ensure the technology reaches those who need it most.
With the right execution, TalkSign has the potential to redefine how accessibility is experienced, shifting it from an assisted process to a seamless, real-time form of communication.
Bookmark Techparley.com for the most insightful technology news from the African continent.
Follow us on Twitter @Techparleynews, on Facebook at Techparley Africa, on LinkedIn at Techparley Africa, or on Instagram at Techparleynews.

