Digital spaces of the future must ensure “complete transparency” regarding the use of artificial intelligence (AI) tools and combat misinformation, said Alexandre Leforestier, founder and CEO of the social network Panodyssey.
The French entrepreneur spoke on Wednesday at the World Forum on Creativity (CWF24) in Bilbao, stating that AI in social media is welcome as long as it is used “with complete transparency.”

“AI is welcome if we explain to the public and creators what kind of technology we use at Panodyssey. This point of transparency is essential on the web if we are really considering a digital world with ethics,” Leforestier said during a telephone interview with EFE.
Regarding AI’s potential, he added that it should be used to enhance the design of the platform or to improve user experience, for example, through immediate translations of content published in various languages.
“We can use AI, not to build a barrier, create a bubble, and keep people inside this bubble, but quite the opposite,” clarified the creator of Panodyssey, which aims to be the first European digital space without fake news, cyberbullying, or advertising.
Notably, the project has been developed without generative AI tools (such as those underlying OpenAI’s ChatGPT or Google’s Bard), relying instead on distribution tools intended to “assist creators.”

Lastly, on the issue of misinformation on social networks, Leforestier highlighted his startup’s commitment to account certification for individuals and businesses, so that content creators can be held accountable: “If you publish content in the public space, you have a responsibility, just like on television or in the newspaper. If we want to fight misinformation, the first step is to prove the efficiency of this technology.”
Panodyssey is part of the CREA (Creative Room European Alliance) initiative, a consortium of European companies, organizations, and entities that includes the EFE Agency.
Meanwhile, Daniel Burgos, professor and director of the Institute for Research, Innovation, and Educational Technologies (UNIR iTED), explained in an interview with EFE the value of AI in social networks for providing a selection criterion amidst the overproduction of information or content.
“AI remains a statistical tool. We produce many news items, channels, and sources, not all of them truthful or well-intentioned, so we can filter through all that,” the professor added.
Regarding regulation, the European Parliament ratified the world’s first law on artificial intelligence on March 13. The law, agreed upon by the EU institutions last December, includes provisions banning the technology in uses that pose a risk to citizens.
Similarly, the European Commission demanded that major digital platforms take measures against misinformation campaigns during the European elections, including identifying content generated or manipulated with artificial intelligence.
These measures are already reflected in the Digital Services Act, passed by the European Union two years ago to enhance transparency on platforms, and in the AI Act; Brussels has now approved guidelines adapting them to the specific context of the European Parliament elections in June. EFE