Vice-Chancellor and fellow academics share insights into AI

The four AI panellists seated against a purple backdrop with the university logo on the wall. L-R: Dr Martin Sykora, Dr Georgina Cosma, Dr Saul Albert, and Professor Nick Jennings.

On 5 June, the Vice-Chancellor, Professor Nick Jennings CB FREng FRS, was joined by three university academics to discuss artificial intelligence (AI).

Alongside Professor Jennings, Dr Saul Albert, Dr Georgina Cosma and Dr Martin Sykora each shared insights into their research and the use of artificial intelligence, answering questions sent in by alumni.

The panel discussed how AI could shape the future of education and the workplace, as well as its implications for specific industries. They emphasised the importance of taking a balanced view of AI to avoid hyperbole, and of drawing on a diverse range of perspectives and expertise if AI is to achieve its full potential.

The event offered alumni the opportunity to hear expert discussion of a topic currently dominating the news.

The experts had a range of insights to offer. A recording of the discussion is available on request; please email us at alumni@lboro.ac.uk.

Thank you to our panellists and to everyone who attended the event.

The academics have shared links to further research and reading following the event. A selection of resources is referenced below:

Leslie, D. (2019). Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector. The Alan Turing Institute.

Some thoughts on accountability in AI: Aceves, P. (2023, May 29). ‘I do not think ethical surveillance can exist’: Rumman Chowdhury on accountability in AI. The Guardian.

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021, March). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610-623).

Beioley, K., & Murgia, M. (2023, May 3). UK competition watchdog launches review of AI market. Financial Times (login required).

Brynjolfsson, E., Li, D., & Raymond, L. R. (2023). Generative AI at Work (No. w31161). National Bureau of Economic Research.

Hanley, H. W., & Durumeric, Z. (2023). Machine-Made Media: Monitoring the Mobilization of Machine-Generated Articles on Misinformation and Mainstream News Websites. arXiv preprint arXiv:2305.09820.

He, B., Ahamad, M., & Kumar, S. (2023, April). Reinforcement learning-based counter-misinformation response generation: A case study of COVID-19 vaccine misinformation. In Proceedings of the ACM Web Conference 2023 (pp. 2698-2709).

LLM/ChatGPT for ad moderation (with minority languages): Kayser-Bril, N. (2023). Is Big Social ever going to be honest?

Marcus, G. (2023). GPT-4’s successes, and GPT-4’s failures. The Road to AI We Can Trust.

Reuters (2023, May 19). G7 leaders confirm need for governance of generative AI technology.

An IS research community perspective on the role of generative AI in academic scholarship: Susarla, A., Gopal, R., Thatcher, J. B., & Sarker, S. (2023). The Janus Effect of Generative AI: Charting the Path for Responsible Conduct of Scholarly Activities in Information Systems.

Center for AI Safety (2023). Statement on AI Risk.