Magazine
11:38 1 August 2023
Post by: WBJ

On AI foibles and the need to study its ethics

Phaedra Boinodiris, IBM Consulting's global leader for Trustworthy AI and author of the recently published book "AI for the Rest of Us," sat down with the Warsaw Business Journal at the Infoshare Conference 2023 in Gdańsk to talk about the need for an industry-wide discussion on AI ethics and how to make the elu…


Interview by Beata Socha

What is your take on how children are adapting to current levels of technology and the pace of changing technology? Are we as a species ready to take on that much technology early on?


P.B.: Certainly, I would say that this generation are digital natives. They’re extremely comfortable with the technology. I think it is incumbent upon us to explain to them how to be critical thinkers and critical users of the technology . . . we need to train people to be thinking about things like: what data was used for this model to make this particular prediction? How much better does this model perform when compared to a human? What’s the level of accuracy of this model? Who’s accountable for this model? Were the model’s outputs audited for things like bias? If we don’t train not just the younger generation but ourselves to ask these kinds of questions, we are at a distinct disadvantage.

People are looking at AI as if it were a crystal ball that knows everything. How do you make people realize that it’s just a language model, that it doesn’t understand what it’s saying?

I’m still trying to figure that out, honestly. I’ve been leaning into comics to maybe use humor as a way to teach. Education is crucial, but also those who are deploying and developing AI, it is incumbent upon them to say, “You are using AI right now. This is a large language model.” Tell people how this AI works and how it comes up with its predictions. Rendering that kind of information interpretable and transparent to people is really important.

Do you think there is still room for an AI ethics discussion when there is an explosive AI arms race in this industry?

I think there must be.


How do you convince them to sit down and discuss AI ethics?

There are two things that I say. One is: hey, there’s the risk of litigation, of lawsuits, and of brand decay. On the flip side of that, 80% of investments in AI get stuck in proof of concept. They never see the light of day, because either the investments aren’t directly tied to business strategy or people don’t trust the models. And you don’t want to lose your money, do you? So you want to make sure you’re building it in such a way that you earn people’s trust. So it’s positive reinforcement and negative reinforcement.

There have been claims of privacy violations by LLMs and other generative AI technologies. How can technology companies be held accountable? Who’s responsible for holding them accountable? And for open-source AI, is there room for accountability when everyone has access to AI?

I think this is one of the big outstanding questions, and it remains to be seen. I know there are a lot of different governments taking a look at what the regulatory framework should look like for something like this. There are a lot of discussions about things like custom curated data sets, where the data is scrubbed for copyright and IP infringement to make sure that the content being used to train the model is on the up and up.

There are still questions beyond IP and copyright, like hallucinations, when the outputs are blatantly false or biased. Because even though the training data can be carefully curated, that doesn’t mean the output couldn’t be all kinds of biased. It definitely requires caution. It’s why I mentioned the work with ontologies and knowledge graphs. If you pair something like a knowledge graph with an LLM, you get content grounding with data lineage and provenance, and you can then use the LLM to provide variations on that answer. And this way you can say: here is where I got the information, from this source. This is really key. Paired together, you’re getting the best of both worlds. You can do some really interesting things. If explainability is important to an organization, you’re going to need to offer some level of content grounding with a formalized knowledge graph.
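[Editor's note: the grounding pattern Boinodiris describes, answer from a knowledge graph first, then let a language model rephrase, can be sketched in a few lines. This is a toy illustration only; the graph, its facts, and the rendering step are invented for the example and are not IBM tooling.]

```python
# Minimal sketch of "content grounding": facts come from a tiny
# knowledge graph, each carrying its provenance, and only grounded
# facts are ever surfaced in an answer.
from dataclasses import dataclass


@dataclass(frozen=True)
class Fact:
    subject: str
    predicate: str
    obj: str
    source: str  # provenance: where this fact came from


# Hypothetical knowledge graph, stored as a flat list of triples.
KG = [
    Fact("Infoshare 2023", "located_in", "Gdansk", "conference website"),
    Fact("Infoshare 2023", "participants", "6000+", "organizer report"),
]


def ground(subject: str) -> list[Fact]:
    """Return every fact about a subject, with provenance attached."""
    return [f for f in KG if f.subject == subject]


def answer(subject: str) -> str:
    facts = ground(subject)
    if not facts:
        # No grounded data: refuse rather than hallucinate.
        return "No grounded answer available."
    # In a real system an LLM would paraphrase these facts; here we
    # just render them, keeping the citation attached to each claim.
    lines = [
        f"{f.subject} {f.predicate.replace('_', ' ')} {f.obj} (source: {f.source})"
        for f in facts
    ]
    return "; ".join(lines)


print(answer("Infoshare 2023"))
```

The key design point is the refusal branch: when the graph has nothing on a subject, the system says so instead of letting the language model improvise, which is what provides the "here is where I got the information" guarantee.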

How do we guarantee equality of access to AI?

I usually mention the Maori as an inspiring story for a few different reasons. First of all, you have an indigenous culture that owns the tools and the work that they’ve developed, and they only offer them to organizations that reach out to them, represent indigenous groups, and have written consent: you need our permission in order to use this content. Their approach to developing the model . . . it matches the cultural values of the Maori people. There is a big emphasis on attribution, on data lineage and provenance. They even talk about things like bias, or even values, changing over time. They work from this whole premise that you have to nurture these models as they change over time.

LLMs are already out there, and they can’t unlearn something they have been trained on. How do you tweak them when, for example, social values change?

If we have tools that make these ontologies easily buildable and maintainable, then we can have totally different roles for people like librarians: owning the curation of these knowledge graphs, where they will be able to curate the meaning behind words in their own cultural context. And you can use things like natural language processing to build these ontologies pretty quickly, but honestly, it’s more art than tactics. You have to think about how a word works in your own lexicon, how ideas intersect. How do you understand a word, depending on who is speaking? If I say something, it has a different context than if the president is saying it.
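[Editor's note: the idea of using NLP to bootstrap an ontology can be sketched very simply. Real pipelines use dependency parsing and, as Boinodiris stresses, human curation; this hypothetical example only catches "X is a Y" statements and is purely illustrative.]

```python
# Toy sketch: harvest "X is a Y" statements from text into a
# term -> {parent concepts} mapping, a crude seed for an ontology.
import re
from collections import defaultdict

IS_A = re.compile(r"(\w[\w ]*?) is an? ([\w ]+?)[.,]")


def extract_is_a(text: str) -> dict:
    """Collect term -> set-of-parents pairs from 'X is a Y' sentences."""
    ontology = defaultdict(set)
    for term, parent in IS_A.findall(text):
        # Drop a leading article so "A librarian" becomes "librarian".
        key = re.sub(r"^an? ", "", term.strip().lower())
        ontology[key].add(parent.strip().lower())
    return dict(ontology)


corpus = ("A librarian is a curator of meaning. "
          "A knowledge graph is a formal model of shared concepts.")
print(extract_is_a(corpus))
```

Even this crude extractor shows why curation stays human work: the pattern has no idea which speaker's lexicon a word belongs to, which is exactly the cultural-context judgment the interview assigns to librarians.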

Thank you!

Infoshare Conference 2023 - a festival of tech-driven community 

Over 6000 participants, 500 startups and corporations, and nearly 200 experts from all over the world shared their expertise on 9 stages at Infoshare. The largest tech conference in CEE took place in Gdańsk. Check out the overview of the event at www.infoshare.pl/conference
