Articles

Lukas Madl

People in the age of AI

The workshop focused on the question: How is AI changing our humanity? In interactive group sessions, participants discussed real-life scenarios – from ChatGPT in higher education to AI in medicine and grief counseling. A high-profile panel featuring Robert Trappl, Jesse de Pagter, Stefan Hupe, and Lukas Madl deepened the discussion of AI and humanity: What values matter? Where are the opportunities, and where are the risks? The goal was not only to impart knowledge but also to promote dialogue — and to become a little more human.

Child protection in the Internet age

Digital services are now an integral part of children’s lives, but their protection often falls by the wayside. The British “Children’s Code” sets new standards in this area by requiring providers to design their digital offerings in a way that serves the best interests of children. This includes age-appropriate privacy settings, transparent communication, and ethical design. The presentation highlighted the legal basis, outlined specific implementation strategies, and made it clear that data protection for children is not only a legal obligation but also a moral one. Companies are called upon to take responsibility — not only to comply with laws but also to build trust and protect children.

AI in administration: opportunities and risks

Public administration is also undergoing profound change as a result of AI. The presentation showed how AI can be used effectively to optimize processes, reduce the workload on employees, and serve citizens. At the same time, it addressed the risks: from algorithmic discrimination to workplace changes and ethical dilemmas. AI is like fire: it can warm you, but it can also burn you. That is why clear values and principles, as well as an ethical compass, are needed to shape technology in the interests of people. The presentation called for the responsible use of AI in public administration.

"We would like to express our sincere thanks for the highly professional and informative presentation. The feedback from everyone involved was overwhelmingly positive: any reservations were dispelled and enthusiasm for new products was sparked. We wish you continued success — see you again soon. 😊"
René Gneist, STADir., M.A., 1st Chairman of the ARGE of City Administrators of Lower Austria

Man or Machine?

The discussion revolved around the tension between human intelligence and artificial intelligence (AI). Key questions included: Will machines soon be better than humans? What abilities will remain exclusive to humans? And how will AI change our self-image, our education, and our working world? The discussion covered whether we overestimate or underestimate AI, what ethical and regulatory challenges exist — for example, in connection with the AI Act — and how our trust in data-based systems is developing. The impact on schools, learning, and truth in the age of fake news and AI-generated art was also addressed. The discussion was broad in scope, covering philosophical, ethical, and social issues, and ultimately posed the question: What makes us human?

The panel discussion featured:
Dr. Robert König, philosopher and ethicist at the University of Vienna, science ambassador for the OEAD
Mag. Lukas Madl, founder and CEO of innovethic, expert in responsible innovation and trustworthy AI
Michael Volpert, founder of cup2gether, partner and advisor at structr
Stefan Hupe, mentor in the Thinker Circle
Students from the 8AB ethics class at BG Bachgasse

The discussion was moderated by Mag.a Gabi Holzer.

Why (only) ethical AI leads to success

In his presentation “Why (only) ethical AI leads to success,” Lukas Madl emphasized that although artificial intelligence is powerful, it must be guided by human ethical responsibility to ensure that it benefits society. Using the rise and fall of Theranos as an example, he illustrated the dangers that arise when ethical principles are disregarded in high-impact innovative technologies. He outlined three key pitfalls — unfulfilled promises, underestimated risks, and unresolved conflicts between benefits and harms — and argued that ethical AI is not about machines acting morally, but about humans ensuring responsible development and deployment. Drawing on risk assessments, stakeholder impact analyses, and alignment with frameworks such as the EU AI Act, Madl showed how ethical AI can create a win-win situation in which innovation, trust, and profitability coexist.

Winning with ethics: Why ethical AI leads to success

In the long term, AI can only be successful and gain social acceptance if it is developed and used in an ethically responsible manner. Lukas Madl illustrated the risks of “high-impact innovations” when ethical principles are neglected, using the story of Theranos and current examples such as AI-supported application tools and surveillance technologies. He emphasized that AI systems deeply influence human decision-making processes and must therefore be designed with particular care—in terms of fairness, transparency, and human dignity. At the end of his presentation, he introduced a structured risk assessment process that helps companies implement AI responsibly—as an investment in trust, efficiency, and sustainable success.

Between relief and dehumanization – designing AI in nursing care ethically

What ethical challenges and opportunities come with the use of AI in nursing care? Given the nursing crisis and staff overload, AI can offer valuable relief — for example, through intelligent diagnostic systems that take over routine tasks, thereby freeing up more time for human interaction. But technology is not an end in itself: it must be designed to serve people — both those in need of care and those providing it. Madl emphasized that AI systems are ethically acceptable only if they do not violate the fundamental values of care and do not replace, but rather reinforce, uniquely human aspects such as compassion, presence, and empathy. A careful risk and impact analysis is essential in order to use AI in care responsibly and in the spirit of a “win-win zone.”

Digital medicine under scrutiny

What is hype, and what can algorithms really achieve in medicine? We discussed these questions with a broad panel of experts from various disciplines — from neurology and oncology to ethics — and explored the opportunities, limitations, and ethical challenges of digital technologies in healthcare.

The EU AI Act: Safety, transparency, and responsibility in AI

In my presentation, I introduced the EU AI Act as a key instrument for systematically addressing the risks of artificial intelligence. It became clear that AI poses not only technical but also profound social and ethical challenges — for example, through discrimination, lack of transparency, or interference with fundamental rights. The law's risk-based approach distinguishes between minimal, limited, high, and unacceptable risk, and sets different requirements depending on the classification. I particularly emphasized that AI is a socio-technical system in which humans are not only users but also affected parties — and that standards, transparency, and responsible data governance are crucial to ensuring trust and security.
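The risk-based approach described above can be sketched in a few lines of code. This is an illustrative sketch only: the example systems and the one-line requirement summaries are my own simplified assumptions, not a legal classification under the Act.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the EU AI Act's risk-based approach."""
    MINIMAL = "minimal"
    LIMITED = "limited"            # transparency obligations
    HIGH = "high"                  # strict requirements before deployment
    UNACCEPTABLE = "unacceptable"  # prohibited practices

# Hypothetical example classifications, for illustration only;
# a real classification depends on the Act's annexes and legal analysis.
EXAMPLES = {
    "spam filter": RiskTier.MINIMAL,
    "customer service chatbot": RiskTier.LIMITED,
    "CV screening for hiring": RiskTier.HIGH,
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
}

def requirements(tier: RiskTier) -> str:
    """Very rough summary of what each tier entails."""
    return {
        RiskTier.MINIMAL: "no specific obligations",
        RiskTier.LIMITED: "transparency obligations",
        RiskTier.HIGH: "risk management, data governance, human oversight",
        RiskTier.UNACCEPTABLE: "prohibited",
    }[tier]

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.value} risk -> {requirements(tier)}")
```

The point of the tiered model is exactly what the code makes visible: obligations scale with the classification, from none at all up to an outright ban.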
