May News from Information Security

Welcome to the summer months at Berry! Graduation is behind us, as are the busy weeks before and after it, which is why this newsletter is only now arriving at the midpoint of the month. I knew no one had time to read it while everything else was happening.

On the topic of things that are NOT happening anymore: Spring 2024 Cybersecurity Awareness Training. The course is closed, and we will offer another one in the fall. If you missed this one, don’t panic; I’m not coming around to berate you or even give you a disappointed shake of my head. I understand that there were many challenges this semester, and I’ve heard from a number of you who have not completed it explaining why. It’s OK. We’re moving forward without looking back. However, I assume that at some point the Office of Information Technology will be given either a stick or a carrot to motivate you all to complete the training. More info on that if/when it happens.

What I really want to share in this newsletter is something that many of you are already using, whether you know it or not, and whether you want to or not. Yep, I’m writing about artificial intelligence, or AI. Now that it is 2024, I feel we have reached a tipping point with AI. ChatGPT, probably the best-known AI (more precisely, a generative AI), is now at version 4.0, and other leading AIs are also in their third or fourth iterations. There are more topics in this area than we can address here, but I want to focus on the security and privacy aspects of these tools (I am, of course, the cybersecurity guy).

First I want to clarify the term generative AI. There are all types of AI, from the generative AI we’ll discuss here to predictive AI, reactive AI, and the (currently) only theoretical general AI. Generative AI is focused on producing output, whether text, image, audio, or video, using trained models (more on that in a bit), usually in response to prompts entered into a computer program or web browser. Different generative AIs will be better at certain tasks based on how their models were trained.

Models are “taught” information, sometimes just general knowledge; in other cases they are trained on specific information for better output of images or audio, or on a particular subject, e.g., programming languages or medical knowledge. It is these models we are naming when we talk about OpenAI’s ChatGPT, Google’s Gemini, or Meta’s Llama.

All of the generative AI models we have access to now, from ChatGPT to open source models running on standard desktops, are designed to learn and retain data. You can turn this functionality off when using ChatGPT and other commercial models, but most have it on by default and bury the steps for turning it off in a help article they refer you to when you set up an account. The systems will also warn you that the responses you get may not be accurate and that you should take steps to verify anything the AI gives you.

Anyone who has worked (or played) with generative AI models knows that they can be truly fascinating and will surprise you with their ability to produce interesting output. To produce this output, models must be trained, and when you provide a document or an image to a generative AI, it will retain that information, to some degree, unless you turn this functionality off. What does that mean for those of us who have intentionally or unintentionally bumped into AI functionality in the programs we use?

If you have opened a document in Adobe Acrobat lately, you will have noticed the AI Assistant in the top right corner. In Microsoft products like Excel, you can install Copilot as an add-in, offering the ability to analyze your data, create documents faster, and improve your productivity. These are tools, not inherently good or evil, just capable of both, and they should be used with caution. Private or sensitive data should not be ingested into or manipulated with them. Using generative AI to produce “original” content is generally considered safe, e.g., asking ChatGPT or some other tool to write a formal business email about a topic, or part of a newsletter, like I did in February of last year. (It is highly ethical to be transparent about the use of generative AI to create content. More on this topic in another newsletter.) Just remember that the output is not always accurate and sometimes is definitely not what you asked the tool to do.

Even now, there are a number of tasks that many generative AI models cannot complete successfully. The most shocking, and to my mind most fascinating, is the widespread failure of most models to write ten sentences that all end with the word “apple” (or some other word of your choice). Most get six to eight out of ten, and it generally takes successive attempts to get a model to complete the task. This is something an elementary school student could do with little effort. Fascinating!

There is one last topic I want to mention as we move toward the end of this fiscal year and the start of the new one. Many of you probably have a project or objective you are working on (or hopefully only planning to work on at this point) that involves information technology in some way. We (OIT) want to help and support you as much as we can. Information technology services are now easy to obtain – all it usually takes is a credit card, but integration and security are topics rarely discussed by vendors. They are providing a service with the primary goal of making money. Security and integration are not first and foremost when talking with a sales rep or using a web interface to set up a new service unless YOU bring it up. Please let OIT partner with you to make these projects successful and secure. Contact the Technical Support Desk, let them know what you are planning, and they will get you in touch with the right department in OIT to assist you.

All Berry students, faculty, and staff have MFA enabled on their Berry account, and you should use it in the most secure way, via the Microsoft Authenticator app on your smartphone. But don’t stop there! Use Microsoft Authenticator as your second factor on any site that supports Google Authenticator. Turn MFA/2FA on everywhere you can. Yes, it will take you another few seconds to log in, but your data and account will be safer.
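Why can Microsoft Authenticator stand in for Google Authenticator? Both implement the same open standard, time-based one-time passwords (TOTP, RFC 6238), so any compliant app can generate codes from the secret in the enrollment QR code. For the technically curious, here is a minimal sketch of how those six-digit codes are derived (illustrative only; the function name and sample secret are my own):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """Compute an RFC 6238 TOTP code from a base32-encoded shared secret."""
    # Decode the base32 secret (the string behind the enrollment QR code).
    padded = secret_b32.upper() + "=" * (-len(secret_b32) % 8)
    key = base64.b32decode(padded)
    # The moving factor is the number of 30-second steps since the Unix epoch.
    counter = int((time.time() if t is None else t) // step)
    # HMAC-SHA1 over the big-endian counter, then dynamic truncation (RFC 4226).
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)
```

Because the algorithm is standardized, the site and the app only have to share that one secret; any app computing this same function will agree on the code for the current 30-second window.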

Please continue to report those phishing emails! Avoid using “unsubscribe” links and report spam via the “Report message” button, just like you would a phishing email.

If I’m not covering a topic of cybersecurity you are interested in or concerned about, please let me know. I want to be your first and best resource on cybersecurity information, so tell me how I can help and inform you.

If you’re not following Berry OIT on Facebook (@BerryCollegeOIT), Twitter (@berryoit), or Instagram (@berrycollegeoit), you should be, as more information from OIT, and specifically Information Security, will be provided through these outlets. If you are not into social media, you can also subscribe to get updates via email. Just use the link available in the right-hand sidebar on the website.

Check out https://support.berry.edu for more information about OIT and the services we provide. You can always check back here for warnings about current phishing emails, confirmations of valid emails you might have a question about, and data breach notifications. There’s also the events calendar where events will be posted, like Cybersecurity Awareness Month.

Food For Thought

Our food for thought this time goes right along with the main topic of the newsletter. This is one of the better explanations of generative AI for the totally uninitiated. If you already use ChatGPT, Copilot, Gemini, Claude, or other models, this information may prove too basic for you. The video is 18 minutes long, so you may have to set aside some time to watch it, but I highly encourage you to do so if you are curious and have no experience with generative AI at all.

Featured Image: Photo by Steve Johnson on Unsplash

