
AMI (TSE: 3773), Japan’s #1 AI Voice Recognition Company for B2B Markets – H.E.R.O. Innovators Insights from CEO Dr. Kiyoyuki Suzuki | H.E.R.O. HeartWare | 12 November

What would a world with a thousand diverse AI-powered "Alexas" look like, and how much could it improve the well-being and productivity of both individuals and businesses, if the meaning, intention, context, and even emotion of our voice could be recognized and understood accurately?

It is past midnight on a public holiday and you urgently need to find a rental property near a particular train station. You pick up your smartphone and converse with an AI virtual assistant that searches the property database and recommends an appropriate room. Without specialized information, Amazon's Alexa cannot assist with this request. Leopalace21 (TSE: 8848), one of Japan's leading real estate companies with over 590,000 rooms for lease nationwide, engaged AMI (TSE: 3773), which implemented its cloud-based AI voice dialogue solution "AmiAgent" in April 2018. AMI has also implemented its next-generation AI conversational virtual assistant (3.4% of sales) for clients such as Bank of Tokyo-Mitsubishi UFJ (TSE: 8306), Kansai Electric Power (TSE: 9503), etc.

Advanced Media, Inc. (AMI) is Japan's #1 AI voice/speech recognition company for B2B markets, with its AmiVoice solution commanding a leading market share of 64.6% across multiple applications: medical, call centers, mobile app development for car navigation systems, household appliances, smartphone cameras, robots and IoT devices; conference proceedings, manufacturing, logistics, distribution, construction and property management/maintenance; and language education. Invigorated by advances in artificial intelligence led by Apple's Siri (2011) and Amazon Echo's Alexa (2015), speech recognition technology has accelerated to complement, or even potentially replace, touch as an important choice for next-generation human-machine interaction in consumer electronics, automobiles, and industrial applications. Solving high-value problems for its customers enables AMI to generate an EBIT margin of 20.3% and a positive free cashflow (FCF) margin of 21.8%, with ROE (= EBIT/equity) of 8.9%, ROA of 8.2%, and a healthy balance sheet with net cash of 8bn yen (US$70.4m, or 21.4% of market cap) as at its end-September 2018 2Q financial results. On 9 Nov 2018, AMI announced its 2Q results, in which first-half (Apr-Sep 2018) sales increased 20.5% yoy to 1.716bn yen, operating profit rose 37.2% to 129m yen, and ordinary profit jumped 2.5-fold to 309m yen. AMI has achieved a 73% absolute increase in sales over the recent three years and turned around from a loss-making position, which helped propel a 163% increase in market value over the past three years to US$329m.
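As a quick back-of-the-envelope check of the quoted balance-sheet figures (using only the numbers stated above):

```python
# Sanity check of the quoted figures: net cash of US$70.4m against a US$329m market value.
net_cash_usd_m = 70.4
market_cap_usd_m = 329.0
print(f"net cash as a share of market cap: {net_cash_usd_m / market_cap_usd_m:.1%}")  # ~21.4%
```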

Tired of listening to endless voice guidance and waiting when you call customer service? AMI has collaborated with U-NEXT Inc (TSE: 9418) to introduce its "AI Concierge" to reduce response time and contribute to the efficient operation of next-generation call centers. On 10 Oct 2018, AMI announced a collaboration with Amazon's cloud contact center service "Amazon Connect" on AWS to provide its AI speech recognition solution "AmiVoice Communication Suite" to more customers. AMI also introduced, on 9 Oct 2018, an "emotion analysis" add-on to the AmiVoice Communication Suite based on LSTM (long short-term memory) deep recurrent neural network technology, which displays the emotions of both customers and call center operators, such as joy, sadness and anger, to enable appropriate communication according to the customer's emotions. AMI also provides its AI voice recognition solution to overseas customers, including in Thailand (4.1% of sales), largely through a partnership with CP Group's telecommunications giant True Corporation, which is the second-largest mobile operator, the largest ISP and CATV provider, and the #3 local player in call center outsourcing services; and in Taiwan and China (2.2% of sales), with customers such as the entire call center operations of China home appliance giant Midea. AMI was also awarded, in Feb 2017, the Development Innovation Award sponsored by the People's Bank of China's (PBOC) ICFCC for its efforts with the AmiVoice Communication Suite, which has helped reduce costs and improve response quality in the call centers of a major life insurance company. Upgrading and transforming next-generation call centers with its AI voice recognition solution contributes 31.3% of AMI's sales.
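For readers curious what an LSTM-based emotion classifier can look like in practice, here is a minimal, illustrative sketch in Python/Keras. This is not AMI's implementation: the acoustic features (MFCC-style frames), the network size and the four emotion labels are assumptions made purely for illustration.

```python
# Illustrative sketch only: a small LSTM classifier over acoustic feature sequences
# (e.g. MFCC frames per utterance) predicting an emotion label per call segment.
import numpy as np
from tensorflow.keras import layers, models

NUM_FRAMES, NUM_FEATURES, NUM_EMOTIONS = 200, 13, 4  # joy, sadness, anger, neutral (assumed)

model = models.Sequential([
    layers.Input(shape=(NUM_FRAMES, NUM_FEATURES)),
    layers.Masking(mask_value=0.0),          # ignore zero-padded frames
    layers.LSTM(64, return_sequences=True),  # stacked ("deep") recurrent layers
    layers.LSTM(32),
    layers.Dense(NUM_EMOTIONS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Dummy batch standing in for labelled call-center audio features.
x = np.random.rand(8, NUM_FRAMES, NUM_FEATURES).astype("float32")
y = np.random.randint(0, NUM_EMOTIONS, size=(8,))
model.fit(x, y, epochs=1, verbose=0)
print(model.predict(x[:1]).round(3))  # per-emotion probabilities for one utterance
```

In a production system the same idea would be trained on large volumes of labelled call audio and run on both the customer and operator channels.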

Remember the receptionist humanoid that you read about in the news or saw on TV/YouTube in 2017, which can recognize and speak multiple languages and is deployed in hospitality venues such as H.I.S. Co.'s (TSE: 9603) popular Henn na ("strange") robotic hotel chain (http://www.h-n-h.jp/en)? Yes, the AI voice recognition software in the humanoid, developed by Sanrio's (TSE: 8136) animatronics subsidiary Kokoro Co, is powered by AMI. A Henn na hotel typically has 100 or so rooms and employs fewer than 10 people, leaving most of the reception, cleaning and porter work to robots, and the chain has since expanded to six locations. Plans are already underway to open another 50 Henn na hotels by 2020. H.I.S. is shifting more of its focus to the profitable hotel business amid fierce competition in its mainstay travel and flight-booking operations. H.I.S.'s hotel business had sales of 8.2 billion yen ($72 million) for the year ended October 2017, up 24% on the year, and its operating profit rose 38% to 764 million yen. Hideo Sawada, worth an estimated US$640 million based on his 35% stake in H.I.S., aims to offer the unmanned reception system for sale to other operators and have it installed at 1,000 hotels. The Kokoro-built humanoid is also deployed at Narita Airport, Tokyo's main international gateway, to sell travel insurance, give directions to the nearest restroom or restaurants, and so on. Cloud-based voice recognition solutions in mobile app development for car navigation systems (NaviTime), smartphone cameras, robots and IoT devices, with customers that include H.I.S. Co., contribute 15.5% of AMI's sales.

Curious to hear the conversation between AI AmiAgent and Kizuna AI, the world’s first AI virtual YouTuber with 2.3 million subscribers? Click on this YouTube link (with English subtitles): https://www.youtube.com/watch?v=lnYHJaMeeew

Clinical documentation workload has burdened medical professionals, from doctors and nurses to radiologists and pharmacists, who spend a great deal of time entering the patient story and diagnosis into their computers. Imagine unburdening them so that they spend 50% to 80% less time on documentation. AMI's voice-controlled, hands-free solution covers medical terminology and allows doctors to enter medical information just by speaking, enabling input directly into electronic medical charts (a core task at medical institutions), as well as the creation of drug administration instructions and radiogram interpretation reports and the preparation of patient referral documents. With AmiVoice, healthcare professionals can focus on the primary work of medical care and see even more patients, as it reduces the time required to create medical documents, creating more time for interviews with patients, promoting better patient/provider relationships and improving the quality of medical care.

AMI has a near-exclusive domestic share in the medical field (17.7% of sales) for the voice-controlled creation of medical certificates, reports and the like at over 6,000 healthcare facilities, including over 1,100 hospitals, as well as clinics, dispensing pharmacies and radiology/pathology labs, compared to a total addressable market in Japan of over 9,000 hospitals, 100,000 clinics, and 55,000 dispensing pharmacies. Solutions are available for various specialties, including electronic medical charts, referral letters, nursing records and discharge summaries; radiogram interpretation reports for radiology; electronic medication records, orthopedic records, ophthalmology records, rehabilitation records, mental healthcare records, pathology reports and dental records; medical mail and research paper preparation; and a voice recognition application for radiologists.

CEO Dr. Kiyoyuki Suzuki, who founded AMI in 1997, shared how AMI's voice recognition solution is solving social problems by unburdening the workload of medical professionals: "In recent years, shortages of human resources at medical institutions and the increased work burden have become major social problems. In such a situation, I thought that the speech recognition technology of Advanced Media could be fully utilized in work reforms. For instance, AmiVoice iNote has been used at the Ishikawa Memorial Hito Hospital since June 2018. By speaking to a smartphone, voice data can be entered, saved, and shared as text. The entered information can be confirmed on a PC and pasted into any system, such as the electronic medical record or medical treatment documents. Moreover, since an SNS function is attached, it is possible for individuals or groups to share information by sending and receiving data such as voice and photos, or by chatting. This shortens the time for document preparation and enables smooth task sharing. In addition, since the usage status of each user can be graphed, it can also be utilized for behavior analysis such as optimizing personnel placement. As a result, the daily input time for medical charts has been shortened by 70% to 80%. By introducing AmiVoice iNote, not only was it possible to reduce the input time for medical charts, but the intervention time for patients also increased and the off-hours worked by staff decreased."

"The medical field and the nursing care field are regarded as important fields for our business, now and in the future. That is because these are fields where the necessity of writing exists and where there are proven, clear productivity results from applying speech recognition to improve the workflow. In 2016, the Japan Chain Drug Stores Association recommended the introduction of voice input systems as a solution to prevent the recurrence of inappropriate medication history management. Against such a background, the introduction of AI voice input at dispensing pharmacies has accelerated. The number of hospitals, clinics and dispensing pharmacies in Japan is about 9,000, 100,000, and 55,000 respectively. Our penetration rates are about 15%, 3%, and 3-4% respectively."

Toyota's subsidiary Daihatsu Motor adopted AMI's speech recognition technology and the "AmiVoice Front" wearable in Feb 2018 to develop a system that recognizes the mechanic's speech and automatically inputs the inspection results during regular inspection and maintenance of a car. By rolling this system out nationwide in the future, Daihatsu aims to improve the efficiency and quality of its inspection and maintenance service work. Similarly, Toyota's Gifu Body Industry deployed AMI's voice recognition system in May 2018 for vehicle inspection work, reducing work time by two-thirds as well as dramatically reducing work errors. AMI also introduced its AmiVoice Super Inspection Platform service to streamline building inspection, drawing and photography work, reducing worktime by 40-50% at over 100 construction companies, which were positively surprised at the accuracy in recognizing architectural technical terms even in noisy environments. Since there are dozens of companies engaged in post-inspection follow-up work, from wallpaper and flooring to kitchens and other parts, AMI also incorporated a "SIP AI" engine to automatically select the partner to follow up, drastically reducing the work required after returning to the office from an inspection to sort out and coordinate with the multiple firms.


At Ginza Cozy Corner, which has over 400 confectionery stores nationwide in Japan, approximately 200 kinds of products are always lined up in each shop, including popular cream puffs, fresh cakes, baked snacks and chocolates best suited for gifts. Since 2010, Ginza Cozy Corner has used AMI's voice recognition solution at its distribution center to deliver goods to stores more efficiently, increasing sorting speed by 20% and reducing the error rate by 84% to 1 in 120,000. In Oct 2018, as it celebrated its 70th anniversary, it redesigned its sorting system to equip it with AMI's deep neural network (DNN) AI speech recognition engine, further improving speed by 20% and reducing the error rate by another 62% to 1 in 330,000. MonotaRO (TSE: 3064), Japan's largest ecommerce operator in B2B MRO (maintenance, repair and operations) industrial supplies, handling over 10 million items, also adopted AMI's voice recognition wearable solution in Jan 2017 for staff at its mega Amagasaki Distribution Center, where communication sound quality is poor. Sweden's Trelleborg Sealing Solutions (OM: TREL), whose critical sealing solutions are deployed in demanding applications from vehicles and aerospace to robots, has reduced precision measurement work by 40% since deploying AMI's voice recognition solution in Japan. Hankyu Railway has also adopted AMI's multilingual speech translation wearable solution AmiVoice TransGuide to communicate better with foreign visitors to Japan using the metro, including evacuation guidance in the case of a disaster, even in environments with no internet connection.
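As a quick check that the quoted error-rate figures hang together, moving from roughly 1 error in 120,000 picks to 1 in 330,000 corresponds to a reduction of about 64%, broadly in line with the roughly 62% cited above:

```python
# Check: going from 1 error in 120,000 to 1 in 330,000 implies a ~64% reduction in error rate.
old_rate = 1 / 120_000
new_rate = 1 / 330_000
print(f"implied reduction: {1 - new_rate / old_rate:.1%}")  # ~63.6%
```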

AMI also partnered in Jun 2017 with Fronteo (TSE: 2158) to perform compliance and forensic checks on financial transactions conducted by phone that may conflict with laws and regulations, analyzing transactions that may deviate from corporate policy, sales policy, rules, etc. Fujitsu's Nifty Corp (TSE: 3828) also partnered with AMI to provide an anti-fraud smart alert service for phone calls. Nifty uses the AmiVoice AI solution to analyze the content of a call, and when it detects a sentence pattern commonly used in bank-transfer fraud, it notifies the person who received the call and family members registered in advance. AMI collaborated with FISCO (Jasdaq: 3807) in Mar 2016 to apply its speech recognition technology to transmitting information from the financial results IR briefings of listed companies to investors, including big data analysis of the voice database accumulated by FISCO on whether management's remarks on the forecast are confident or conservative, to assess whether the stock price is likely to rise or fall on those remarks and statements.
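To illustrate how such a phrase-pattern alert might work, here is a minimal sketch; the patterns, contacts and notification step are all invented for illustration and are not Nifty's or AMI's actual service.

```python
# Minimal sketch: scan a transcribed call for phrase patterns commonly associated
# with bank-transfer fraud and alert pre-registered family members if any match.
import re

FRAUD_PATTERNS = [                       # hypothetical patterns, not a real rule set
    r"ATM.*operate",
    r"cash card.*hand (it )?over",
    r"refund.*today",
]
REGISTERED_CONTACTS = ["family_member@example.com"]   # assumed pre-registered contacts

def check_transcript(transcript: str) -> list[str]:
    """Return the fraud patterns that match the transcribed call, if any."""
    return [p for p in FRAUD_PATTERNS if re.search(p, transcript, re.IGNORECASE)]

def alert_if_suspicious(transcript: str) -> None:
    hits = check_transcript(transcript)
    for contact in REGISTERED_CONTACTS if hits else []:
        # A real service would send a push notification or email here.
        print(f"ALERT to {contact}: suspicious phrases detected: {hits}")

alert_if_suspicious("Please go to the ATM and operate it while I stay on the line.")
```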

At over 100 local government bodies all over Japan, including the Tokyo Metropolitan Assembly and the Hokkaido Government, the "AmiVoice Minutes Production Support System" converts meetings, interviews, seminars and parliamentary remarks into text in real time. Nippon TV also uses AmiVoice as a real-time caption production system to create subtitles for sports broadcasts. AMI's VoXT (cloud-based voice-to-text service) contributes 9.4% of sales. On 26 Sep 2018, SoftBank announced that it will use AmiVoice for its smartphone voice messaging function Voice Mail Plus to make it easy to check the contents of a voice message in situations where it is difficult to listen to one, such as during a meeting or on a train. Semiconductor giant Renesas Electronics (TSE: 6723) announced on 19 Sep 2018 that it has incorporated AmiVoice Micro into a high-performance HMI chip solution that does not require an internet connection and uses noise reduction technology to achieve a high recognition rate in noisy environments, for application in consumer and industrial equipment.

In B2C applications, AMI's solution is also used to operate Kyocera's smartphone camera by voice recognition, such as releasing the shutter, zooming in or out, changing camera modes and controlling video recording, even in noisy environments and when the microphone is far from the mouth. AMI has also expanded its AI voice recognition solution into the B2C market through its wholly-owned subsidiary Glamo (11.5% of sales), which has its own AI voice multi-device remote controller, iRemocon, that works together with Amazon's Alexa and Softbank's Pepper humanoid to control air-conditioning, lighting, TVs, etc.

While we are impressed by the multiple commercial applications in voice recognition implemented by AMI to solve genuine problems, there are some applications which we find too niche, leading us to ponder AMI's process for developing and launching new services and solutions. These include AmiVoice Forensic, for autopsy findings to be inputted and shared in real time, developed in response to the social demands around autopsies, with only around 11% of dead bodies reaching autopsy due to the shortage of forensic doctors and the increased work burden; AmiVoice Emergency, for emergency rescuers doing pre-hospital rescue in a noisy ambulance environment to record and transmit rescue data to the doctor at the hospital in real time, so as to obtain appropriate instructions or better prepare for the patient's acceptance at the destination hospital and improve the life-saving rate; AmiVoice Reporter for Field Research; AmiVoice for wearable glasses; etc. We are also disappointed that AMI has not been able to scale up its automotive AI voice recognition solution since its partnership with NaviTime in 2013, losing the race in the important smart car market to global leader Nuance Communications (NASDAQ: NUAN) and China's iFlyTek (SZSE: 002230). AMI's 2Q results announced on 9 Nov 2018 were not strong on a standalone quarterly basis, with sales increasing 1% yoy and ordinary profit declining 24.7% yoy, although the cumulative first-half results were generally healthy as highlighted earlier.

However, we think AMI is the most attractive AI voice recognition player in the industry, commanding a deserved scarcity premium in valuation with its superior profitability, positive free cashflow and healthy net-cash balance sheet, ahead of Nuance, whose operating profitability has plunged 54.2% in the past three years and which is weighed down by US$1.84bn in net debt, and iFlyTek, whose long receivables period and decelerating growth make us uncomfortable. More commentary follows later in the comparison of AMI with Nuance and iFlyTek. While we are positive on AMI's long-term potential after hearing CEO Suzuki's inspiring entrepreneurial story in pioneering the market, we would prefer to monitor AMI further for a tipping point in its exponential growth trajectory. Compared with last week's highlight of another listed AI company, BrainPad (TSE: 3655), AMI remains in our broader watchlist of 300+ companies while BrainPad is in our focused portfolio of 42 H.E.R.O. Innovators. Thus far, of the 45 entrepreneurs and CEOs whom we have highlighted in our weekly HeartWare, 20 are in our focused portfolio while the rest are in our broader watchlist.

CEO Suzuki shared his goal for AMI to achieve Human Communication Integration (HCI) and how we have now progressed to the fifth-generation technology stage of super speech recognition: "Advanced Media Inc. is making it possible to communicate with machines with human-like communication capabilities as part of our everyday lives. Service-oriented businesses that target education, medicine, finance, welfare, and other sectors are expected to flourish in the 21st century. Yet the quality of these services will ultimately depend on effective communications, particularly between humans and computers. Driven by AmiVoice from Advanced Media, this once futuristic notion is fast becoming a reality. Our goal at Advanced Media is to achieve Human Communication Integration, or HCI, an integrated state wherein people can use natural communication to benefit from machines and computers. In this context, AmiVoice has a vital role to play in nearly every facet of industry and society. HCI is already in use for simple and complex tasks alike, including everything from taking meeting notes to running specialized applications in call centers, cloud service companies, and the medical industry. HCI is also expanding into the global arena, making important contributions in fields such as education, logistics, and energy management. As the future unfolds, the spotlight is certain to shine on HCI and Advanced Media."

"The day will come when machines and humans become best friends, making natural conversation. We see a world where machines can carry out interactive dialogue that recognizes human emotions with artificial intelligence, responding to health counselling or legal counselling, or having a dialogue with the car. Actually, speech recognition technology was born 60 years ago. The first-generation technology (in the 1960s) recognized words. The second-generation technology (1991-2000) was capable of recognizing sentences, converting voice into text. The third-generation technology (2001-2008) was the recognition of human subjects. However, due to problems caused by speech fluctuation, such as the need to pre-learn utterances, changes in the speed of speech, intonation, and differences in accent, there was a big wall to climb before the technology could spread. The wall is that people must make vocalizations that computers can recognize easily. The fourth generation (2008-2010) became ubiquitous in mobile phones, home appliances, cars, etc. The fifth generation (2011-present) is super speech recognition, the personification that enables human-like speech recognition and dialogue, taking us into the full-scale soft communication era. The idea of manifesting potential demand is created and delivered in the form of services in the next generation, and the era of IoT requires natural voice communication between people and machines."

CEO Suzuki commented that AMI's speech recognition accuracy is superior to that of the tech giants GAFA (Google, Apple, Facebook, Amazon): "Now with cloud computing and AI, voice is converted to text via the IoT platform, its meaning can be interpreted and various acts can be carried out. Alternatively, you can acquire important information from cyberspace and convey it with text, still images, movies, voice, and so on. Indeed, it can be said that the era of 'speech transformation (ST)' based on AI speech recognition has come. Our speech recognition exceeds that of the European and American giant enterprises through region-specific data accumulation, precision improvement know-how, and innovation by AI, keeping our superiority in recognition accuracy improvement. The feature of AmiVoice lies in its high recognition accuracy. When comparing the voice input of about 200 characters with other AI speech recognition services, it took 245 seconds with Apple's Siri and 288 seconds with Google, but AmiVoice completed the task in 130 seconds, including the correction time for recognition errors. We are confident that we offer voice recognition technology beyond GAFA (Google, Apple, Facebook, Amazon). We have a large selection of AI speech recognition services, based on area-specific, high-precision speech recognition, at attractive prices, which we will use in the coming era of speech transformation together with you."

On AMI's business model and applications, CEO Suzuki explained: "Advanced Media has established leading market shares in CTI (our mainstay voice recognition market) and conference proceedings, as well as an exclusive market share in medical applications. Moreover, by introducing the Voice Activation Service (VAS) and Voice Data Service (VDS), we have developed the fee-based models needed to achieve sustained revenue streams while laying the foundations for stable sales and profit growth. We have two services. One is to write by voice. The other is to operate by voice. For example, at a call center, the machine senses and guides using the information contained in the voice. The meaning is grasped from information that is transcribed into text, and AI takes action. As the quality of the talk with the customer rises and efficiency rises, the handling time also becomes shorter, and the customer is satisfied. The need to utilize artificial intelligence and speech recognition technology to solve the response quality and operational efficiency of next-generation call center operators has increased. Also, if you digitize voice, it becomes possible to extract useful information for marketing from that data. Recently, with the rapid proliferation of smart devices and the resulting opportunities for applications of voice recognition technologies, we have actively invested in R&D to increase the accuracy of our voice recognition technology. We are also investing in R&D for multilingual compatibility, including Asian languages, with our sights set on development in the key Asian markets that will continue to drive the global economy."

Given that the voice recognition market has an extremely high barrier in every country due to the subtleties of language, CEO Suzuki further comments that the technical terms of various industries that AMI has accumulated have earned it supremacy over the foreign giant companies: "Together with speech recognition technology, the evolution of new technologies such as AI and IoT has also been great. There are two AI voice solutions: AI voice recognition and AI voice dialogue. The former is our AI speech recognition technology AmiVoice; the latter is the AI speech dialogue AmiAgent, which we are building into a new business. We have grown AI speech recognition through 20 years of innovation in terms of accuracy, and its stability has also improved. Speech recognition simultaneously performs two processes, sound processing and language processing, and deep neural networks are extremely effective for sound processing. With AI, the more you use it, the more accuracy you get. As for language processing, however, accuracy cannot be raised so easily, as language is said to be rooted in culture. However, we have accumulated the data and technical terms of various industries over the years in the B2B world and grasp the know-how to improve accuracy. GAFA (Google, Apple, Facebook, Amazon) are getting data from consumers, but they do not have data that can be used in B2B. That is why we have the best speech recognition technology in the world, beyond GAFA. What they also cannot recognize are the Japanese subtleties. When you say that a relationship is 'tatsu' (severed), there are two possible characters, one meaning 'disconnection' and one meaning 'absolute', and the correct character cannot always be applied. Moreover, since this usage changes depending on the feeling at that time, if you use speech recognition in business without this precision, you have to make corrections each time and it is not efficient. In the Japanese market, we have supremacy over GAFA."
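To make the two-process idea concrete, here is a toy sketch of how a candidate transcription's acoustic score can be combined with a domain language model score, so that accumulated industry text, rather than the audio alone, resolves near-homophones. All scores, words and weights below are invented for illustration; this is not AmiVoice's actual decoder.

```python
# Toy decoder: combine an "acoustic" score (how well each candidate matches the audio)
# with a "language" score (how plausible the word sequence is in the target domain).
import math

# Hypothetical acoustic log-probabilities for two competing transcriptions of one utterance.
acoustic_scores = {
    "sever the contract": -12.1,
    "savour the contract": -12.0,   # slightly better acoustically, but implausible in context
}

# A tiny bigram language model built from domain text (counts made up).
bigram_logprob = {
    ("sever", "the"): math.log(0.30),
    ("savour", "the"): math.log(0.01),
    ("the", "contract"): math.log(0.20),
}

def language_score(sentence: str) -> float:
    words = sentence.split()
    return sum(bigram_logprob.get(bg, math.log(1e-4)) for bg in zip(words, words[1:]))

LM_WEIGHT = 0.8  # how strongly domain language data influences the final choice (assumed)
best = max(acoustic_scores, key=lambda s: acoustic_scores[s] + LM_WEIGHT * language_score(s))
print(best)  # "sever the contract": domain language data breaks the acoustic near-tie
```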

While AMI is profitable and generates positive free cashflow now, CEO Suzuki shared that the entrepreneurial journey since starting AMI in 1997 has been full of struggles. He shared the inspiring story reflectively: "I was born in 1952 in Atsuta-ku in the city of Nagoya in Aichi prefecture. I grew up with my parents, who were self-employed businesspeople running a clothing-related business. I completed the doctoral program at Kyoto University Graduate School of Engineering in two years and was thinking of becoming a researcher at the university. Since I needed data to prove my research, I moved to the laboratory of Toyo Engineering Corporation in 1978. While I was there, I met the prominent mathematician Heisuke Hironaka, who had won the Fields Medal. The AI business boom took off in the United States in 1984, and its influence came to Japan. So later, in 1986, I joined Intelligent Technology Corporation, an artificial intelligence venture company created by Heisuke Hironaka. In 1987, I was dispatched to Carnegie Group Inc, a spin-off company of Carnegie Mellon University (CMU), which is a mecca of artificial intelligence research where speech recognition geniuses gathered at the Robotics Institute; they had achieved the highest rankings in speech recognition competitions sponsored by the Defense Advanced Research Projects Agency (DARPA) of the US Department of Defense. I completed the knowledge engineering engineer training program (KECP) sponsored by the Carnegie Group in four months. There was a curriculum in the various languages of artificial intelligence. At the time, there were no companies that had succeeded in the speech recognition business."

"A chance encounter with the speech recognition geniuses at the CMU Robotics Institute gave me a hint of how a speech recognition business could succeed, and I encouraged them to collaborate with us. I returned to Japan, and in 1989 I took over as Managing Director after being Director of R&D. I worked for 12 years to spread artificial intelligence in Japan while strengthening cooperation with the Carnegie Group. In 1998, we jointly developed our voice recognition technology. Based on the belief that speech recognition can demonstrate its true value in communication across different languages, we created a structure for other languages. We divide the speech recognition engine into a language-independent base part and a language-dependent part, and furthermore, we build the base part by adding a region-independent part. At that time, we had already developed a large-vocabulary continuous speech recognition engine for 17 countries, and when I experienced the Japanese version myself, I was excited and surprised by the accuracy and speed of the speech recognition. In December 1997, we established Advanced Media to build up a market in Japan. Elderly people, men and women, different talking speeds, changes in intonation and accents, places with a lot of noise: even under such varied conditions, words can be recognized with a practical level of accuracy. That is the amazing thing about AmiVoice. The year after establishment, in 1998, I formed a consortium of 20 companies. I made a beta version available free of charge, carried out validations and improvements, and made the official release of the AmiVoice speech recognition engine in July 2000."

CEO Suzuki shared how they had a difficult time creating a market for speech recognition and could not create a wave of marketization for years in the initial period, even though they had world-class technology with high accuracy through specialization: "Of course, without the world's best speech recognition technology, we cannot even make a market. Having the world's best technology is a necessary condition, but that alone is useless. It is necessary to turn it into a business. This means developing products and services unique to AmiVoice and AmiAgent, creating users, and getting them to keep using them. The most important thing is that the speech recognition we offer is something people in the world have never seen. Many people applaud if you show it to them. But there are pitfalls there. Even if they applaud, they will not use it. Since people had never seen speech recognition, there was no culture of using it. There are still many people who feel that it is embarrassing to input by voice in public places and on the roadside. Although we were listed in 2005 with world-class technology, with high accuracy through specialization, we had a difficult time creating a market for speech recognition and we could not create a wave of marketization."

"It is easy to say 'create a user and get them to keep using it', but in reality this is very hard work and requires time and effort. In other words, this is the result of innovative products and services being born, and the more the innovation comes to the fore, the more users will jump to them. The determination to keep doing this is also expressed in the company logo. The logo shows a small sphere gradually increasing in size in a positive spiral. It also shows the sphere gradually rising from the plane. The latter means an increase in social value and, at the same time, becoming an indispensable existence for society. Human Communication Integration (HCI), the company's vision, is for people to benefit from machines and computers through natural communication, with AmiVoice fused into society as an indispensable existence. Now, however, the need for speech recognition is rising due to productivity improvement in labor reform and the artificial intelligence boom. In 2009, Google introduced voice search, and in 2011, Apple introduced Siri to the Japanese market; the perception of speech recognition and the culture of 'talking to machines' have progressed rapidly in Japan, and expectations have increased. From now on, it will be an era in which people receive a wide variety of services via the Internet, smartphones and IoT devices. Since the quality of service depends on the communication level between people and computers, I think that the demand for AmiVoice, aiming for HCI, will increase more and more."

CEO Suzuki added that the "JUI" (Joyful, Useful, Indispensable) philosophy has guided them through the years in commercializing voice recognition: "We are a group that creates 'value' rather than 'things'. 'Value' is what the user decides, and it becomes an indispensable presence through continued use. Usage begins with Joyful (fun) and Useful (convenient), and it becomes Indispensable through continued use; that is our management philosophy of 'JUI'. Therefore, our business is not to 'sell existing things' to existing markets, but to sell 'things that are not yet available' to existing markets. Together with clients and users, we make useful, indispensable things and services. So what is required is creativity, a clear image, and the power of suggestion. Even if you work hard for 24 hours, you cannot produce value if you can only see the trees but not the forest. Our mission is to partner with companies that have core technologies in various fields and to create new services. Current artificial intelligence is not enough to co-create with people, as soft communication in terms of the heart of hospitality cannot yet be expressed. I hope to realize Human Communication Integration by utilizing artificial intelligence simply by speaking naturally. Solutions that make use of the artificial intelligence the company aims for must satisfy JUI and hospitality. Our AI voice dialogue technology is used for the virtual assistant adopted in the application of Bank of Tokyo-Mitsubishi UFJ (TSE: 8306) and the automatic property search correspondence for Leopalace21 (TSE: 8848). Because AmiAgent's user interface is a virtual character that can be customized according to the corporate image, its advantages are that you can enjoy a conversation because it is friendly, it responds to various questions accurately with an expressive appearance, it answers jokes, it covers fun, and it can help in various concierge services such as recommendations and ticket reservations. We have a company called Glamo whose business is home electronics voice control and sensors. For example, you can say 'turn off the light' or 'turn on the TV'. What you used to operate with multiple remote controllers can now be operated by voice recognition."

When asked about AMI's overseas expansion plans, given that overseas markets contribute around 6.3% of sales, CEO Suzuki commented on how they started the partnership with Thailand's True Corp: "Established in Thailand in September 2008 as a joint venture with telecommunications company True Corporation, AMIVOICE THAI achieved profitability in the year ending March 2014. Nuance Communications, a worldwide speech recognition enterprise in the United States, was working with the True Group, but we won against Nuance to partner with the True Group and establish the joint venture. We plan to introduce and develop businesses in overseas countries by adapting to local conditions. While being careful to sidestep various risks, including those tied to geopolitics and intellectual property theft, we will prioritize development in the Chinese market due to its enormous scope, scale, and rate of growth. As part of these overseas development efforts, we will pursue strategic alliances, both in terms of business and capital; rapidly acquire sales channels and customer bases; secure human resources; and implement other initiatives to rapidly grow businesses to their targeted scales." CEO Suzuki also commented briefly on Nuance Communications: "Currently, the biggest voice recognition company in the industry is Nuance Communications (NASDAQ: NUAN), which acquired its speech technologies from Lernout & Hauspie (L&H), a pioneer of speech recognition that went bankrupt in 2001. Apple's Siri also uses the basic technology of Nuance. Google's speech recognition technology also includes the knowledge of engineers who moved over from Nuance."

Given that AMI has a healthy 8bn yen (US$70m) in net cash on its balance sheet, we asked CEO Suzuki what his capital allocation plan is for reinvesting this cash and whether there are any M&A targets or investments. CEO Suzuki said that they had made an investment in Israel's AudioBurst (https://www.audioburst.com) in 2016 to acquire a 9.32% stake: "We made a US$2m capital and business alliance with AudioBurst, a technology venture in Israel, in Oct 2016 to acquire a 9.32% stake. AudioBurst indexes, analyzes, and reorganizes billions of audio segments from a wide range of sources including speech, television, radio and internet movies, through cutting-edge artificial intelligence including deep learning, natural language processing technology and intention interpretation technology, to transform them into personalized audio feeds of the optimum, most effective length (1 to 3 minutes) for those who listen to them (service users), based on users' unique listening patterns, interests and preferences. As one of the 'Beyond ASR' (super speech recognition) values that exceed speech recognition, we will introduce and deploy AudioBurst's services and jointly developed services in Japan and the Asian markets. Our business alliance includes AMI acquiring AudioBurst's technology and solutions for the Japanese and Asian markets (China, Taiwan, Korea, Thailand) as a BtoB business, with exclusive rights in the Japanese market. We will promote technology cooperation between AmiVoice, our voice recognition technology, and AudioBurst's voice analysis, accumulation and search technology. We acquired our wholly-owned subsidiary Glamo (https://www.glamo.co.jp) for US$1.52m in Dec 2013 and it now contributes 11.5% of our sales. We also acquired 100% of Shorthand-Center Tsukuba (https://www.s-c-t.jp) for US$0.38m in Aug 2014 and it contributes around 2.2% of our sales."

We have been admirers of Dr. Liu Qingfeng and iFlyTek since before China Mobile acquired a 15% stake in the company for US$215m (at a market valuation of US$1.43bn then) in Aug 2012, a case study of an overlooked innovator which I shared with the CEO and top management team of a listed tech company in a series of workshops, "Uprising! With Bamboo Innovators: Business Model Innovations and TMT Industry Trends", conducted in Singapore, HK and Beijing in 2012. iFlyTek's market value has since jumped nearly 6-fold to $7.14bn.

Like AMI, iFlyTek is the market leader in voice recognition in China, with an overwhelming 70% share in B2B verticals. Founded in 1999 by Dr. Liu Qingfeng, a protégé of Lenovo's founder, when he was a second-year PhD student at the reputable University of Science and Technology of China, iFlytek got its start with seed funding from the university and Liu's university mates. Almost no one believed Liu could succeed then. Today, its voice assistant technology is the Siri of China, and its real-time portable translator puts AI to remarkable use, overcoming dialect, slang, and background noise to translate between Chinese and 33 other languages with high accuracy. It is developing an AI-enabled system to assist courts and judges in reviewing four types of cases, namely murder, theft, telecom fraud and illegal fundraising. Its medical assistance robot helps identify up to 150 diseases and ailments and passed the written test of China's national medical licensing examination in November 2017. The medical robot has been deployed since March 2018 in hospitals in Anhui province to function as a general practitioner and help doctors treat diseases, making over 4,000 diagnoses. Its AI system has also been applied in classrooms of primary and middle schools across the country to help teachers better educate students.

iFlytek recently faced two scandals. One relates to an accusation that it passed off the work of a human translator as the output of its own AI simultaneous machine translation tool during the 2018 International Forum on Innovation in Shanghai in September, which iFlytek clarified by saying it was adopting a "human-machine coupling" approach. The other was an accusation in Oct 2018, in an exposé by state broadcaster CCTV, that iFlytek took advantage of preferential government policies to obtain land and, instead of using the cheap land for its core business, pursued real estate development, which is easier to monetize.

iFlytek's first product was a consumer-facing PC software package called Changyan 2000 (畅言, or changyan, means "speak freely" in English). It allowed users to give voice commands to the PC and also provided an input method that recognized handwritten script. The software package was priced at RMB 2,000, a significant amount of money even now, and advertised in over a dozen provinces in China. It didn't sell. The other reasons Changyan 2000 failed included software piracy and the high operating expenses associated with after-sale care of the software. Perhaps the biggest reason was that the consumer market was just not ready for speech recognition tech at the time. After learning from these failures, iFlytek decided to go the B2B route. The breakthrough was an initial contract to provide speech recognition and synthesis tech for Huawei's internal platforms, which turned into a long-term relationship. Other large clients followed, including ZTE and Lenovo. Soon, call centers, voice navigation, and telecommunications services in China all used iFlytek technology. In 2002, iFlytek started to develop AI chips for voice recognition, which are embedded into home appliances and toys. One of the biggest parts of its business is selling educational software that rates a student's pronunciation of English and even Mandarin Chinese; it can also read written text aloud and, like Google Voice, transcribe conversations. iFlytek turned profitable in 2004 and was listed in May 2008.

An estimated 500 million people use iFlytek's voice input method instead of typing on smartphones and computers. iFlytek's oral examination assessment technology has helped assess over 1.7 million students sitting high school English oral exams in over 10 provinces. Chinese ride-hailing app Didi also uses iFlytek's technology to broadcast orders to drivers. Besides Didi, almost all major apps, including Tencent's QQ, Alibaba's AMAP, Youku, Toutiao, Meituan and Ctrip, choose to use iFlytek's voice cloud solution. iFlytek's voice cloud mainly features the following services: 1) user profiling for precise marketing based on voice input, and 2) a new shopping experience based on voice and facial identification. JD currently uses iFlytek's voice cloud and Big Data marketing platform in daily promotions. iFlytek's developer platform, called iFlytek Open Platform, provides voice-based AI technologies to over 400,000 developers in various industries such as smart home and mobile Internet. In August 2017, iFlytek launched a voice assistant for drivers called Xiaofeiyu (Little Flying Fish) to place calls, play music, look for directions, and search for restaurants through voice commands.

iFlytek has a lower profit margin than AMI partly because it has a systems integration business (~20-30% of sales), which includes hardware products such as audio and video monitoring and receives orders on a project basis from central or local governments for venues such as hospitals, subway stations and airports. Compared to iFlytek, AMI is weak in the education business (~30% of sales at iFlytek) and the telecom business (~15% of sales) and does not receive support such as grants from the government; iFlytek also generates revenue from smart city and public security applications for the public sector (~10% of sales), which is willing to invest heavily. We think AMI can learn from iFlytek's Big Data business in communication data analysis and its big data advertising platform, which was introduced in 2015. At Nuance Communications, by contrast, healthcare is the biggest contributor to sales at 46.4%, given that private insurance companies sit at the top of the payment chain to hospitals and doctors in the US and various clinical documents are required, a situation unique to the US market, followed by enterprise at 23.8%, mobile at 20.5%, and an imaging business at 11.2%.

As shared earlier, we think AMI is the most attractive AI voice recognition player in the industry, commanding a deserved scarcity premium in valuation with its superior profitability, positive free cashflow and healthy net-cash balance sheet, ahead of Nuance, whose operating profitability has plunged 54.2% in the past three years and which is weighed down by US$1.84bn in net debt, and iFlyTek, whose long receivables period and decelerating growth make us uncomfortable.

CEO Suzuki summed up by emphasizing the corporate philosophy of "GAP" and the importance of having a purpose in life: "My characteristic and unique approach is to first set Goals that seem impossible and break them down into actionable 'small' milestones at our feet, promptly carry out actions to achieve those milestones one after another (Agile), and keep at it patiently and persistently (Persevere). Then you can achieve impossible goals. I call it GAP. Fostering GAP ability is a reliable means of continuously growing the company. Challenging change will bring us sustainable growth. In order for 'Challenge & Change' to become commonplace as a culture, it is necessary for our staff to acquire GAP ability. If this culture takes root, our sustainable growth will be guaranteed. It is necessary to fail in order to challenge and grasp success. Humans realize happiness through self-fulfilment and growth. Living is the same as working, in terms of seeking growth through GAP. It is important to acquire a process that can clarify our purpose in life, why I am here. I do not think everyone thinks the same way, but I think that the achievement of purpose is the source of satisfaction in life. Therefore, the setting of a good purpose in life leads to satisfaction in life. The real cause of complaints is often not in others but in ourselves. So it is important to be aware of your own feelings, and if you can always be satisfied, that person's life will end with satisfaction. In that sense, I think that I have to control my mind and convince myself. I set a purpose and go forward for it, but it is meaningless if my mind does not accept that it is fine. So, first of all, I think that it is important to have control of the mind."

"Our corporate philosophy is to be able to return gratitude to our parents and to the society that raised us, and by returning that benefit, we become part of society and an existence that is indispensable for human beings. 'Blessing' is the purpose of corporate activity, and eventually, 'we live'. The most necessary thing for doing business is a vision, a North Star. It is a mark that you cannot grasp, yet one toward which everyone on the ship can row at full speed. Our North Star is for Advanced Media to bring prosperity to mankind through communication with machines."


Intrigued and want to read more? Download this week's H.E.R.O. HeartWare: Weekly Asia Tech News, with brief highlights of the inspiring entrepreneurial stories of tech leaders in Asia whom we have been monitoring over the past decade in our broader watchlist of over 200 listed Asian tech companies and our focused portfolio of 40 HERO Innovators, who reveal the problems and successes behind building their companies. Inspired by Brandon Stanton's photo-journalistic project Humans of New York, which collects and highlights the street portraits and moving stories of people around us who were doing things that changed lives and made a difference in the city but often went unnoticed, we have curated a collection of Hear the Heart of the H.E.R.O. stories on our website, which we aim to update with refreshing and uplifting new stories weekly. Please check them out and give us your valuable feedback so that we can improve and make them better for you.


It started with rethinking a few questions. Question No. 1: Can the megacap tech elephants still dance? Or is this the better question: Is there an alternative and better way to capture the long-term investment returns created by disruptive forces and innovation without chasing the highly popular megacap tech stocks, falling for the "Next-Big-Thing" trap of overpaying for "growth", or investing in fads, me-too imitators, or even seemingly cutting-edge technologies without the ability to monetize and generate recurring revenue through a sustainable and scalable business model? How can we distinguish between the true innovators and the swarming imitators?

Question No. 2: What if the "non-disruptive" group of reasonably decent quality companies with seemingly "cheap" valuations, a fertile hunting ground for value investors, all need to have their longer-term profitability and balance sheet asset values "reset" by deducting a substantial amount of deferred innovation-related expenses and investments every year, given that they are persistently behind the innovation cycle relative to the disruptors, just to stay "relevant" enough to survive and compete? Let's say this invisible expense and deferred liability on the balance sheet amounts to 20 to 30% of revenue (or likely more); its inexactitude is hidden, and its wildness lurks and lies in wait. Would you still think that these companies are "cheap" in valuation?
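To see why, consider a stylised, hypothetical illustration (all numbers below are made up, not drawn from any specific company):

```python
# A seemingly "cheap" incumbent trading at 10x reported operating profit stops looking
# cheap once a deferred innovation-related charge of 20-30% of revenue is deducted.
revenue = 100.0
reported_operating_profit = 25.0        # a 25% reported operating margin
market_cap = 250.0                      # 10x reported operating profit: looks "cheap"

for charge_pct in (0.20, 0.30):
    adjusted_profit = reported_operating_profit - charge_pct * revenue
    if adjusted_profit > 0:
        print(f"charge {charge_pct:.0%}: adjusted profit {adjusted_profit:.1f}, "
              f"implied multiple {market_cap / adjusted_profit:.0f}x")
    else:
        print(f"charge {charge_pct:.0%}: adjusted profit {adjusted_profit:.1f}, loss-making")
```

At a 20% charge the "cheap" 10x multiple becomes 50x; at 30% the company is loss-making on an adjusted basis.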

Consider the déjà vu case of Kmart vs Walmart in the 2000s and now Walmart vs Amazon. It is easy to forget that Kmart spent US$2 billion in 2000/01 on IT and used the same supplier as Walmart – IBM. The tangible assets and investments were there in the balance sheet and the valuation was "cheap". Yet the spending failed to compound value for Kmart the way it did for Walmart. Now Walmart is investing billions to "catch up" and stay relevant. The key word is "relevancy" to garner valuation.

We now live in an exponential world, and as the Baupost chief and super value investor Seth Klarman warns, disruption is accelerating "exponentially" and value investing has evolved. The paradigm shift to avoid the cheap-gets-cheaper "value traps", to stay curious and humble, and to keep learning and adapting has never been more critical for value investors. We believe there is a structural break in the market's multi-year appraisal (as opposed to "mean reversion" in valuation over a period of 2-5 years) of the type of business models, the "exponential innovators", that can survive, compete and thrive in the challenging exponential world we now live in. Tech-focused innovators with non-linear exponential growth potential are the most relevant multi-year investment trend and opportunity.

During our value investing journey in the Asian capital jungles over the past decade-plus, we have observed that many entrepreneurs were successful at the beginning in growing their companies to a certain size; then growth seems to suddenly stall or even reverse, and they become misguided or even corrupted along the way in what they want out of their business and life, leading to a deteriorating tailspin that defeats the buy-and-hold strategy and gives currency to the practice of trading in and out of stocks. On the other hand, there exists an exclusive, under-the-radar group of innovators who are exceptional market leaders in their respective fields with unique scalable business models, run by high-integrity, honorable and far-sighted entrepreneurs with a higher purpose in solving high-value problems for their customers and society, whom we call H.E.R.O. – "Honorable. Exponential. Resilient. Organization." – the inspiration behind the H.E.R.O. Innovators Fund, (surprisingly) the only Asian SMID-cap tech-focused fund in the industry.

The H.E.R.O. are governed by a greater purpose in their pursuit to contribute to the welfare of people and guided by an inner compass in choosing and focusing on what they are willing to struggle for and what pains they are willing to endure, in continuing to do their quiet inner innovation work, persevering day in and day out. There’s a tendency for us to think that to be a disruptive innovator or to do anything grand, you have to have a special gift, be someone called for. We think ultimately what really matters is the resolve — to want to do it, bring the future forward by throwing yourself into it, to give your life to that which you consider important. We aim to penetrate into the deeper order that whispers beneath the surface of tech innovations and to stand on the firmer ground of experience hard won through hearing and distilling the essence of the stories of our H.E.R.O. in overcoming their struggles and in understanding the origin of their quiet life of purpose, who opened their hearts to us that resilience and innovation is an art that can be learned, which can embolden all of us with more emotional courage and wisdom to go about our own value investing journey and daily life.

As the only Asian SMID-cap tech-focused listed equities fund in the industry, we believe we are uniquely positioned as a distinctive and alternative investment strategy for both institutional and individual investors who seek to capture long-term investment returns created by disruptive forces and innovation without herding or crowding to invest in the highly popular megacap tech stocks, and also provide capital allocation benefit to investors in building optionality in their overall investment portfolio.

The H.E.R.O. HeartWare Weekly highlights interesting tech news and listed Asian emerging tech innovators with unique and scalable wide-moat business models to keep you well-informed about disruptive forces and innovation, new technologies and new business models coming up, and the companies that ride on and benefit from them in some of the most promising areas of the economy in Asia, as part of our thought leadership for our ARCHEA Asia HERO Innovators Fund to add value to our clients and the community. We hope you find the weekly report useful and insightful. Please give us your candid feedback and harshest criticisms so that we can improve further to serve you better. Besides the BATTSS (Baidu, Alibaba, Tencent, TSMC, Softbank, Samsung), do also tell us which Asian tech entrepreneurs and CEOs you admire and respect, and why – we will endeavor to profile them for sharing with the community. Thank you very much and have a beautiful week ahead.

Warm regards,
KB | kb@heroinnovator.com | WhatsApp +65 9695 1860
www.heroinnovator.com


About bambooinnovator
Kee Koon Boon ("KB") is the co-founder and director of HERO Investment Management, which provides specialized fund management and investment advisory services to the ARCHEA Asia HERO Innovators Fund (www.heroinnovator.com), the only Asian SMID-cap tech-focused fund in the industry. KB is an internationally featured investor rooted in the principles of value investing for over a decade as a fund manager and analyst in the Asian capital markets. He started his career at a boutique hedge fund in Singapore, which he joined in 2002, and was part of the core investment committee that significantly outperformed the index in the firm's 10-year-plus-old flagship Asian fund. He was also the portfolio manager for Asia-Pacific equities at Korea's largest mutual fund company. Prior to setting up the H.E.R.O. Innovators Fund, KB was the Chief Investment Officer & CEO of a Singapore Registered Fund Management Company (RFMC), where he was responsible for listed Asian equity investments. KB taught accounting at the Singapore Management University (SMU) as a faculty member and pioneered the 15-week course on Accounting Fraud in Asia as an official module at SMU. KB remains grateful and honored to have been invited by Singapore's financial regulator, the Monetary Authority of Singapore (MAS), to present to their top management team about implementing a world-first, fact-based, forward-looking fraud detection framework to bring about benefits for the capital markets in Singapore and for the public and investment community. KB has also served the community by sharing his insights in articles about value investing and corporate governance in media that include Business Times, Straits Times, Jakarta Post, Manual of Ideas, Investopedia and TedXWallStreet. He has presented at top investment, banking and finance conferences in America, Italy, Sydney, Cape Town, HK and China. He has trained CEOs, entrepreneurs, CFOs and management executives in business strategy and business model innovation in Singapore, HK and China.
