How to (re)build trust in Web 3.0




Tiffany Xingyu Wang, Chief Strategy Officer at Spectrum Labs and co-founder of the think tank Oasis Consortium.


Over 150 years ago, America saw a rapid population shift from farmland to cities, and packaged food became a fast-growing industry. A young entrepreneur aspired to build a branded empire with his food company. He wanted his customers to see what they were buying, and he became known for exactly that: food consumers could trust, the cleanest and healthiest ketchup. He introduced clear glass bottles and ingredient labeling when no competitor was doing so and no regulation required it. These methods are standard practice today.


He delivered a safe product amid a Wild West with no national standards for food quality, and he inspired trust in consumers through safety and quality. His name is Henry J. Heinz. Today, consumers around the world buy 650 million bottles of Heinz ketchup every year, and The Kraft Heinz Company is a household name, with $26 billion in revenue in 2020.


Trust is a multiplier of branding: Brand = Identity x Trust x People who Trust. Heinz made the cleanest and healthiest products and wanted to be known for it (the brand identity); he used his savvy marketing mind to advertise on billboards, streetcars, and railroad boxcars with the signature green letters on a white background (reaching the people); and ultimately what set him apart was the trust he instilled in consumers.


For Heinz, trust was won through safety over a century ago. But the notion of trust is ever-evolving, and it has kept evolving for Heinz: today, Kraft Heinz aims to use 100% recyclable packaging by 2025. The definition of trust has broadened to include sustainability.


In Web 2.0, we lost trust


Fast forward to 2021. We live in a digital society powered by technology and connected through devices. Technologists call the tissue that connects us and holds our lives together Web 2.0, the social web. It is dominated by a few ultra-large digital platforms that own, process, and analyze users' data, and it is characterized by the social tissue those platforms created and by the rise of user-generated content. These two features let brands grow at unprecedented speed, or as some call it, virally. In the equation of brand = identity x trust x people, brands have optimized for reach: they leverage users' data to target precisely and use the social tissue to reach audiences at lightning speed. However, brands have lost their way when it comes to trust.


As Web 2.0 mined our interests, activity, and online footprints, a focus on growth over trust opened the door to cybercrime, online harassment, and unconscious bias built into the software. There is a cyberattack every 39 seconds; more than 40% of U.S. internet users have reported experiencing online harassment; and commercial facial recognition systems have shown error rates up to 34% higher for dark-skinned women than for light-skinned men. We have lost trust in a web where our data is breached, our safety is threatened, and the technology itself discriminates against human beings.


Trust, the multiplier, has been diminished to a negative. Social media, the tissue that connects us, is becoming a Pandora's box of misinformation and disinformation; gaming platforms, our gateway into a metaverse, have seen an overflow of hate speech, racism, and bullying; and dating platforms contend with human trafficking and child sexual abuse material. When a brand advertises next to such unsafe content, consumers are twice as likely to abandon their purchase out of distrust.


How to (re)build trust in Web 3.0


Trust in a digital society needs to be refined and redefined, and in the march toward digital transformation, brand and digital platform leaders are at the forefront of reestablishing it.


As Web 3.0 comes upon us, a framework for restoring trust needs to align with the key underlying movements shaping the new web: the rising internet of things (IoT), the forthcoming metaverse powered by 3D graphics, and the semantic web, all with AI assisting decision-making.


We humans are migrating fast into the web, yet we are not ready. To ride each wave, we need guardrails to protect our personal data as the web is distributed to the edge, to keep us safe as we move into the metaverse, and to create a semantic web that represents individuality without bias. A framework that brings ethics to these movements and eases our transition into Web 3.0 rests on three pillars.


Those three pillars, through which Web 3.0 can redefine and rebuild trust, are privacy, safety, and representation. Each sets a guardrail for one of the defining characteristics of Web 3.0, and each of the following sections unpacks methods and approaches for brands and digital platforms to (re)build trust in the era of Web 3.0.


Privacy in the rise of the internet of things


By 2025, an estimated 75 billion-plus devices will be online. The tissue that connects us extends beyond social networks, with nodes not just on smartphones and desktops but on smart TVs, fridges, delivery parcels, and basically any device with a chip. The perimeter of devices is so distributed that the network is effectively decentralized, and each device can store, process, or transmit people's data. While IoT extends reach and redefines convenience for consumers, it also opens vulnerabilities to more data breaches. Brands and platforms that can protect data rights and data dignity will instill trust in their users and consumers.


Current methods of protecting data and assets centrally, at the enterprise level, will no longer suffice. If devices are distributed yet protection remains centralized, this cat-and-mouse race will be a losing one for privacy and security leaders. We need to shift both mindset and methods from protecting data and assets to protecting people. Putting people at the center of data-protection operations answers the inevitable rise of IoT, and three emerging privacy-preserving trends ride this wave.


The first avenue is automated, orchestrated data control and consent management. Baking consent and data control into the design of a brand's or platform's data operations restores users' rights over their data, from the moment data is captured, processed, and analyzed until the point a user asks for it to be removed. Recently, Patreon, a marketplace for artists and creators, partnered with Ketch to make privacy protection seamless and return data rights to users. Land on Patreon's front page and a well-designed privacy management box pops up and requests your privacy preferences in plain language, without nudging you to concede your data one way or another. This privacy feature is on-brand with Patreon's community-first identity; it instills trust in users and adds to the platform's brand equity.
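As a minimal sketch of what consent-by-design can look like in practice, every data operation checks a consent record first, and an erasure request purges data end to end. The ConsentLedger class and its fields below are illustrative assumptions, not Ketch's or Patreon's actual implementation:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentLedger:
    """Per-user record of which processing purposes were consented to."""
    grants: dict = field(default_factory=dict)  # user_id -> {purpose: granted_at}

    def grant(self, user_id: str, purpose: str) -> None:
        """Record explicit, per-purpose consent with a timestamp."""
        self.grants.setdefault(user_id, {})[purpose] = datetime.now(timezone.utc)

    def allowed(self, user_id: str, purpose: str) -> bool:
        """Gate every downstream data operation on an affirmative grant."""
        return purpose in self.grants.get(user_id, {})

    def erase(self, user_id: str, data_store: dict) -> None:
        """Honor a removal request: purge consent and the data it covered."""
        self.grants.pop(user_id, None)
        data_store.pop(user_id, None)

# Usage: analytics may only read a profile after an explicit grant.
store = {"user-42": {"plan": "creator", "country": "US"}}
ledger = ConsentLedger()
ledger.grant("user-42", "analytics")
if ledger.allowed("user-42", "analytics"):
    profile = store["user-42"]  # permitted processing
ledger.erase("user-42", store)  # right to be forgotten, end to end
```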


The second method is differential privacy, a mathematical guarantee that no more can be inferred about an individual than the information that person intends to share. It is well documented that when an entity amasses people's data, it can derive unintended inferences. The method has been adopted by the U.S. Census Bureau, by Apple for phone data, and by Google for Chrome data, but wider adoption is still needed to put people at the center and restore trust with users and consumers.
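To make the idea concrete, the textbook building block is the Laplace mechanism: add noise calibrated to a query's sensitivity and a privacy budget epsilon, so any one person's data barely moves the released answer. This is a minimal sketch of the general technique, not the specific machinery the Census Bureau, Apple, or Google deploy; the dataset and epsilon value are illustrative:

```python
import numpy as np

def private_count(records, predicate, epsilon=0.5):
    """Release a count satisfying epsilon-differential privacy.

    A counting query has sensitivity 1 (one person changes the count by
    at most 1), so Laplace noise with scale 1/epsilon guarantees the
    released value reveals almost nothing about any single individual.
    """
    true_count = sum(1 for r in records if predicate(r))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Illustrative query: how many users are over 40?
ages = [23, 45, 31, 52, 38, 67, 29, 41]
print(private_count(ages, lambda age: age > 40))  # e.g., 3.7, not the exact 4
```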


The last method is building IoT networks on a blockchain. Nodle, a startup, builds crowd connectivity through a vast network of devices and adopts blockchain to ensure data privacy and security at the device level. Its hockey-stick growth, from inception to the largest IoT network within two years, speaks to users' and consumers' unfulfilled demand for data privacy in communications. Adopting any of these methods offers a path to restore trust by respecting the data rights and data dignity of users and consumers.


Safety in the forthcoming metaverse


3D graphics, another hallmark of Web 3.0, render our physical world in a metaverse. We livestream our lives into the virtual world; we learn, play, date, and socialize live through 5G-powered AR and VR, much as Ernest Cline depicted in Ready Player One. While the metaverse offers digital platforms and brands a new territory of engagement, this virtual land has no governance or guardrails. Those who succeed in making community safety a competitive advantage will acquire, retain, and engage users and consumers in a digitally sustainable way.


Parler was pulled from distribution channels because it chose not to have safety guardrails. Gaming platforms routinely make headlines: over 57% of gamers report being trolled and 47% report being harassed. And brands are 2.7 times more likely to lose consumers along the purchase journey when their advertising sits next to unsafe content. The safety issue is paramount, and digital platforms and brands are starting to take action, building out trust and safety arms to address it. We are only at the beginning of this march.


Two aspects are foundational to building community and brand safety. First, governance must come before moderation. A common pitfall for digital platforms and brands is investing in content moderation, human moderators and technology alike, before reviewing and designing community policies. Community policies act as codes of conduct for community participants: they define acceptable behaviors, guardrails, and penalties for users who breach digital safety guidance. Today, the teams that design policies are often siloed from brand and marketing teams, and the policies themselves are buried at the bottom of landing pages, two or three clicks away from users. In other words, governance is not tied by design to the brand identity, nor is it an integral part of the user experience. That is a tremendous missed opportunity for leaders to strengthen the trust factor of branding by translating brand beliefs and identities into community policies.


Recently, I co-founded a think tank, the Oasis Consortium, with business leaders from major gaming, dating, social media, and live-streaming platforms, ad agencies, and publishers to create a framework for existing platforms to benchmark against and for emerging platforms to jumpstart with. O.A.S.I.S. stands for Openness, Accountability, Security, Innovation, and Sustainability. The low-hanging fruit, and yet the most critical piece, is Openness: a principle inviting digital platforms to design and define their own policies as an expression of their brand identity and user base.


Then comes technology. The content moderation industry is growing at over 10% annually toward a $10 billion-plus global market. There are three key approaches: human moderation, keyword-based techniques, and contextual AI. Human moderation exposes moderators to traumatizing content; the horrifying accounts of PTSD among moderators are alarming and prove the need for technological intervention. Keyword-based techniques simply identify and flag toxic words, with accuracy worse than a coin flip. Contextual AI is an emerging strategy that uses contextual data to decide whether a conversation should be flagged. An example shows the difference: the F-word is profanity, but "This is F-ing awesome" expresses positive sentiment. If the same words appear in conjunction with child sexual abuse material (CSAM), human trafficking, white supremacy, or hate speech, however, they can violate community policies. The best market practice today pairs contextual AI with humans in the loop, driving accuracy above 90% while reducing moderators' exposure to online toxicity.
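The gap between the two automated approaches can be sketched in a few lines. The keyword filter below is the real (and flawed) logic; the contextual scorer is a deliberately simplified stand-in for what in production would be a trained classifier over the whole conversation, and both word lists are illustrative assumptions:

```python
BLOCKLIST = {"f-ing", "fucking"}                  # toy profanity list
HARM_SIGNALS = {"send pics", "don't tell anyone"} # toy grooming/abuse cues

def keyword_flag(message: str) -> bool:
    """Keyword approach: flag any blocklisted token, blind to context."""
    return any(token in BLOCKLIST for token in message.lower().split())

def contextual_flag(message: str, conversation: list[str]) -> bool:
    """Toy contextual approach: the decision rests on conversation-level
    harm signals, so benign profanity passes while the same words next to
    grooming or hate cues are flagged. A real system replaces this lookup
    with a trained model scoring the full thread."""
    thread = " ".join(conversation + [message]).lower()
    return any(signal in thread for signal in HARM_SIGNALS)

print(keyword_flag("This is f-ing awesome"))             # True: false positive
print(contextual_flag("This is f-ing awesome", []))      # False: benign context
print(contextual_flag("This is f-ing awesome",
                      ["send pics", "don't tell anyone"]))  # True: harmful context
```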


Representation in the semantic web


AI is only as good as its data. User-generated content (UGC) is a major product of Web 2.0: it comprises all the traces users leave on the internet, from things as light as reactions, comments, chats, and short posts to things as expansive as blogs and short- to long-format audio and video. Feeds of UGC have become a major source of the data on which AI bases decisions.


In 2016, Microsoft unveiled an AI-powered chatbot named Tay and set it loose on Twitter to experiment with conversational understanding. In less than 24 hours, Twitter UGC turned Tay into a racist. Crap in, crap out; racism in, racism out. Recent research found that commercially available facial recognition engines from major corporations detect light-skinned males strikingly better than dark-skinned females because the AI was trained on image sets overwhelmingly composed of white males.


Methods to rectify the course are well documented. But to fundamentally reverse it and un-bias the web, we will have to bake representation into content creation itself. That, coupled with equitable hiring across AI research, engineering, and product leadership, can instill representation by design, from upstream content creation through AI training and decision-making.


Photography insiders have long known that color film was created solely with white people in mind. The rise of media and entertainment brands like Macro, which focuses on amplifying underrepresented creatives, will diversify text, audio, and video content and make the semantic web more inclusive. Representation by design from the outset of content creation also presents untapped business opportunities: in Hollywood alone, studies show that the lack of representation leaves over $10 billion a year on the table. Inclusive content creation through diverse talent on- and off-screen engages and represents a broader audience, including people of color. And as streaming becomes mainstream, that entertainment is distributed all over the world.


With the next half of the world coming online, empowering inclusive content creation will be a competitive advantage not only for studios but for every digital platform that seeks a global user base.


Privacy by design, safety by design, and representation by design will bring ethics to Web 3.0. Leaders who adopt these pillars to surf the technological waves will build trust and, with it, sustainable growth.


Twenty years ago, there was a startup whose mission was to put software on the cloud. Can you imagine a startup at that time asking customers to put their data on the cloud? Its consistent answer to those customers was trust. The company's first pillar was cybersecurity, because it first had to secure the most precious asset customers entrusted to it: their data. Over 20 years, its notion of trust has expanded from cybersecurity to quality to customer experience, and recently it incorporated ethics, diversity, and inclusion. That company is Salesforce. It is valued at over $200 billion and still enjoys rapid year-over-year growth. The old-school success story of Heinz and the modern-day tale of Salesforce drive home the value of trust, and the critical need to rebuild trust in the internet before it is too far gone.


Tiffany Xingyu Wang is the Chief Strategy Officer at Spectrum Labs and has worked with leading digital platforms to safeguard over 1 billion users online.
