L.A.B.S. #3
Getting Unstuck: Ask-Me-Anything on AI in Test Development
How to use AI in test development, legal considerations, and generative AI applications.
Unsure how to integrate AI into your test development to generate exam items? Still trying to figure out the legal considerations? Curious about the alternative content generative AI can produce?
Our host Brodie Wise, EVP of Business Development & Marketing, is joined by:
- Pat Ward (President & CEO, ITS)
- Marc Weinstein (Attorney, Marc J. Weinstein PLLC)
- Bridget Herd (Sr. VP Program Management, Pearson VUE)
- Liberty Munson (Dir. Psychometrics, Microsoft)
They tackle your toughest questions on AI in test development and provide thought-provoking insights. By watching, you'll acquire new information, find solutions to challenges, and better understand the potential of generative AI.
Interested in partnering on a webinar? Share your ideas at webinars@testsys.com.
1
00:00:08.200 --> 00:00:12.420
Hi everyone! Welcome, welcome! We're gonna get started in a few seconds.
2
00:00:13.780 --> 00:00:16.490
We're watching people come in. There's a lot of people gonna be on.
3
00:00:17.900 --> 00:00:19.090
Thank you for joining today!
4
00:00:22.800 --> 00:00:26.170
Okay, this is recorded, so we're gonna kick it off. Hi everyone!
5
00:00:26.170 --> 00:00:30.010
Thank you so much for joining this webinar today. We are excited!
6
00:00:30.550 --> 00:00:31.570
My name is Brodie Wise,
7
00:00:31.710 --> 00:00:35.250
I'm the EVP of Business Development and Marketing here at ITS.
8
00:00:35.320 --> 00:00:39.260
I'll be your host and moderator today. Once again, thank you for joining.
9
00:00:39.440 --> 00:00:40.660
We have an incredible panel.
10
00:00:41.090 --> 00:00:43.700
They'll answer a lot of questions about AI and test development,
11
00:00:43.700 --> 00:00:45.340
and if there's other things that get brought up,
12
00:00:45.470 --> 00:00:48.180
we're gonna probably jump into those too, depending on time.
13
00:00:49.120 --> 00:00:53.420
The conversation really began earlier this year at ATP.
14
00:00:53.520 --> 00:00:56.980
We had a lot of great discussions. Chat GPT came out,
15
00:00:56.980 --> 00:00:59.020
people are trying to figure things out,
16
00:00:59.360 --> 00:01:03.140
and the purpose of today is to ask anything,
17
00:01:03.140 --> 00:01:06.500
but we're not starting with the baseline things that we started with.
18
00:01:06.510 --> 00:01:10.180
We're gonna get into the more in-depth discussions and the things that you have
19
00:01:10.180 --> 00:01:12.580
real questions about. So, that's where you come in!
20
00:01:13.050 --> 00:01:17.500
It's an ask-me-anything format. At the bottom of
21
00:01:18.320 --> 00:01:22.140
the user interface for Zoom, there's a Q&A feature.
22
00:01:22.370 --> 00:01:26.020
What I need you to do is type in any question that you have.
23
00:01:26.440 --> 00:01:29.580
Please submit those questions there. And then,
24
00:01:29.580 --> 00:01:33.860
you'll also have the ability to rate those questions, tag 'em so they get rated
25
00:01:34.020 --> 00:01:37.140
higher. So then we focus on those questions first, second, and third.
26
00:01:38.940 --> 00:01:43.440
The webinar will be recorded. Then after it's done, in like a day or two,
27
00:01:43.440 --> 00:01:46.840
you'll get a link for the webinar and you can watch post-webinar,
28
00:01:47.060 --> 00:01:51.360
in case you miss something or need to hop off. Also,
29
00:01:52.050 --> 00:01:55.800
don't close too quickly:
30
00:01:55.820 --> 00:01:59.240
at the end of the Zoom meeting, you'll get a survey, and
31
00:01:59.420 --> 00:02:02.640
we want to get your feedback and suggestions for additional topics.
32
00:02:02.640 --> 00:02:06.400
We're really excited about this one. Well, without further ado,
33
00:02:06.460 --> 00:02:10.560
let me introduce the all-star panelists who'll talk about AI and test
34
00:02:10.560 --> 00:02:13.420
development. First, let me begin with Bridget Herd.
35
00:02:13.830 --> 00:02:18.140
She's the SVP of Program Management and Project Management Office at Pearson
36
00:02:18.140 --> 00:02:18.973
VUE.
37
00:02:19.100 --> 00:02:23.470
Bridget is a dynamic and accomplished individual with expertise in diverse
38
00:02:23.470 --> 00:02:26.150
fields, known for exceptional leadership,
39
00:02:26.480 --> 00:02:31.190
creativity and unwavering commitment to making a positive impact on
40
00:02:31.190 --> 00:02:34.900
society. Then we have Liberty Munson,
41
00:02:35.480 --> 00:02:38.340
who is a leading figure in the certification industry,
42
00:02:38.810 --> 00:02:42.540
serving as the Director of Psychometrics for Microsoft Worldwide Learning,
43
00:02:43.390 --> 00:02:47.860
overseeing development of valid and reliable assessments for the Microsoft
44
00:02:47.860 --> 00:02:49.460
Technical Certification program,
45
00:02:50.230 --> 00:02:54.020
while championing innovative approaches to assessment design,
46
00:02:54.200 --> 00:02:58.840
and the future of certification. Next we have Marc Weinstein.
47
00:02:59.610 --> 00:03:04.060
Marc has practiced law since 1998 and is renowned
48
00:03:04.440 --> 00:03:09.380
as a sought-after speaker and writer about legal issues impacting the
49
00:03:09.380 --> 00:03:10.300
testing industry.
50
00:03:11.420 --> 00:03:15.960
He is recognized for his contributions to publications and conferences,
51
00:03:16.580 --> 00:03:21.200
and provides legal services to organizations that use tests to make
52
00:03:21.200 --> 00:03:24.080
important decisions about people. In addition,
53
00:03:24.140 --> 00:03:28.400
he serves as an Executive at Caveon. And last but not least,
54
00:03:29.020 --> 00:03:33.120
we have Pat Ward, President and CEO at Internet Testing Systems.
55
00:03:33.900 --> 00:03:38.400
He is a leader in the assessment industry, blending technology expertise and
56
00:03:38.400 --> 00:03:42.400
innovative assessment concepts, with over 30 years'
57
00:03:42.930 --> 00:03:46.840
experience, and is recognized for significant contributions to the field.
58
00:03:48.020 --> 00:03:52.000
So with that, with all these people, here's the best part about those intros.
59
00:03:52.010 --> 00:03:55.400
Those intros were mainly written by Chat GPT,
60
00:03:55.400 --> 00:03:58.400
and just modified a little in some cases. Can you believe it?
61
00:03:58.590 --> 00:04:01.480
They sound so great! I, I mean, I didn't have to do much.
62
00:04:02.780 --> 00:04:06.120
So now we're gonna start jumping into questions,
63
00:04:06.420 --> 00:04:10.830
and this is where I think they're gonna, um, well,
64
00:04:10.830 --> 00:04:14.230
we'll just jump right into the first one. So the first one's, um,
65
00:04:14.370 --> 00:04:18.470
by Kelly Britton, and it says, let me get it at the top,
66
00:04:18.470 --> 00:04:19.750
it's jumping around pretty quickly.
67
00:04:20.370 --> 00:04:23.710
How many of you on this webinar are already using Chat
68
00:04:23.950 --> 00:04:28.510
GPT or AI to generate test items for your high stakes
69
00:04:28.510 --> 00:04:32.350
operational exams? You know, with that,
70
00:04:32.450 --> 00:04:35.070
I'm gonna turn it over to you, Bridget, to kick it off.
71
00:04:36.540 --> 00:04:37.830
Uh oh, I didn't expect that.
72
00:04:38.070 --> 00:04:41.150
I think we have a polling question that we're gonna actually try to get at,
73
00:04:41.150 --> 00:04:41.983
right?
74
00:04:42.060 --> 00:04:44.190
Yeah, we will do the polling question in a few minutes.
75
00:04:44.190 --> 00:04:47.430
Um, so I'm, I'm glad that you did, um,
76
00:04:47.660 --> 00:04:50.190
mention that our bios were written by Chat GPT,
77
00:04:50.190 --> 00:04:54.510
'cause I don't think anybody recognized me from that. Um, I,
78
00:04:54.670 --> 00:04:56.830
I would say when we think about this question, um,
79
00:04:57.370 --> 00:05:00.190
I'm gonna answer it from the perspective of what we're seeing with our clients.
80
00:05:00.850 --> 00:05:04.440
Um, we've got, I would say, categories of clients. The first: I'm afraid,
81
00:05:04.720 --> 00:05:05.560
I don't know where to start.
82
00:05:06.020 --> 00:05:10.320
I'm just gonna keep listening and gathering information. The next: I'm gonna, uh,
83
00:05:10.320 --> 00:05:14.800
dip my toe in it, but I'm not gonna go too fast because I'm still afraid,
84
00:05:15.260 --> 00:05:17.960
and this is when Marc is gonna be so important on this call,
85
00:05:18.320 --> 00:05:21.440
I don't really understand the legal implications or how do I, uh,
86
00:05:21.960 --> 00:05:23.880
minimize my risk if I jump all in.
87
00:05:24.220 --> 00:05:26.000
And then the other ones that are just full bore,
88
00:05:26.020 --> 00:05:27.960
I'm jumping in because that's the way they learn.
89
00:05:27.960 --> 00:05:31.880
That's how they want to kind of diagnose and understand what's going on,
90
00:05:31.900 --> 00:05:36.440
etcetera. So I'm gonna frame it as the categories that I see people in today,
91
00:05:36.460 --> 00:05:41.000
and I think these conversations are intended to help move folks further
92
00:05:41.270 --> 00:05:44.320
into a category they think they should be in. Even though I think right now,
93
00:05:44.320 --> 00:05:45.480
people aren't sure where they should be.
94
00:05:47.150 --> 00:05:49.100
Thank you, Bridget. What about you, Liberty,
95
00:05:49.350 --> 00:05:53.860
since you're the main test program person here on the webinar?
96
00:05:55.340 --> 00:05:59.120
Well, I think we should approach this with caution, with our high stakes, um,
97
00:05:59.690 --> 00:06:02.080
exams, for a lot of reasons.
98
00:06:02.080 --> 00:06:04.920
I think they're a lot of the concerns that I imagine all of us have.
99
00:06:05.540 --> 00:06:09.040
But Microsoft is all in with AI, as you can imagine.
100
00:06:09.420 --> 00:06:12.960
So we've been told to figure out how we can use AI to change
101
00:06:13.300 --> 00:06:16.160
the work that we do and make us more efficient and more effective.
102
00:06:16.160 --> 00:06:20.240
And so we are starting to dabble in this for, um,
103
00:06:20.240 --> 00:06:23.360
some of our lower stakes stuff, like the Knowledge Checks we have on Learn,
104
00:06:23.470 --> 00:06:26.720
some of the practice assessment content that we create and things like that.
105
00:06:27.100 --> 00:06:32.000
But I'm definitely using those kinds of lower stakes content to learn, because
106
00:06:32.100 --> 00:06:36.080
we have very strict guidelines around the look and feel of a Microsoft item.
107
00:06:36.420 --> 00:06:37.720
And just putting in a prompt,
108
00:06:37.720 --> 00:06:41.520
like when we were just playing around with it initially, with something like,
109
00:06:41.540 --> 00:06:44.400
I'm an Azure admin, write me 10 questions on this topic
110
00:06:44.460 --> 00:06:47.240
and it would write questions and we'd say, make 'em expert.
111
00:06:47.240 --> 00:06:50.680
And it really felt like it was more fundamental. And then, uh,
112
00:06:51.040 --> 00:06:54.000
I actually took some of those questions and I put it back into Chat
113
00:06:54.160 --> 00:06:57.960
GPT and I, uh, had the person who was working with me, I said, ask it,
114
00:06:58.020 --> 00:07:01.360
ask Chat GPT if it's a good question. And it came back and it says, no,
115
00:07:01.360 --> 00:07:05.360
not really. I'm like, okay. So I'm asking it to create great items for me,
116
00:07:05.380 --> 00:07:08.920
and it doesn't even believe they're great items. It can't even lie to me.
117
00:07:09.340 --> 00:07:13.200
So yeah, we're starting with the low stakes stuff, uh, and gonna learn from that.
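For illustration, here's a minimal sketch of the generate-then-critique loop Liberty describes, assuming the OpenAI Python SDK; the model name, topic, and prompts are illustrative placeholders, not Microsoft's actual workflow:

```python
# Sketch: ask a model to draft exam items, then ask it to critique its own drafts.
# Assumes `pip install openai` and an API key in the OPENAI_API_KEY env variable.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4"  # illustrative model name

# Step 1: draft items for a stated persona and topic.
draft = client.chat.completions.create(
    model=MODEL,
    messages=[{
        "role": "user",
        "content": "I'm an Azure admin. Write me 10 expert-level "
                   "multiple-choice questions on this topic: virtual networks.",
    }],
).choices[0].message.content

# Step 2: feed the drafts back and ask whether they're good questions.
critique = client.chat.completions.create(
    model=MODEL,
    messages=[{
        "role": "user",
        "content": "Are these good expert-level exam questions? "
                   "Point out any that are too fundamental:\n\n" + draft,
    }],
).choices[0].message.content

print(critique)  # as Liberty found, the model may rate its own items poorly
```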
118
00:07:14.240 --> 00:07:16.800
Excellent. Thanks. You know what, as Bridget brought up,
119
00:07:16.800 --> 00:07:19.120
we're gonna launch our first poll right now.
120
00:07:19.580 --> 00:07:23.600
So you'll see a question pop up about the current use of Chat GPT or AI.
121
00:07:24.060 --> 00:07:26.200
I'd like to see where people come in on that. And the, uh,
122
00:07:26.200 --> 00:07:30.110
the results are coming in pretty quickly and it looks like a good split.
123
00:07:30.110 --> 00:07:33.790
Bridget, so it's interesting, uh, not a lot of people are currently using it.
124
00:07:34.150 --> 00:07:37.510
I don't know if you can see the poll results live or not.
125
00:07:38.960 --> 00:07:43.080
But, uh, of the people on here,
126
00:07:43.080 --> 00:07:47.720
over 50% are planning to use it in the future, about 38% no plans. So,
127
00:07:48.640 --> 00:07:52.800
excellent, excellent feedback. So, why don't we jump into the next question.
128
00:07:53.630 --> 00:07:58.370
This one, here's a really good one. Uh, I like this one.
129
00:07:58.370 --> 00:08:02.490
It says, how could, where is it, let me get the words,
130
00:08:02.600 --> 00:08:05.170
the questions are coming in so quickly I can't keep up with them.
131
00:08:05.520 --> 00:08:09.570
What are the steps a program can take to reduce legal risks
132
00:08:09.940 --> 00:08:14.630
associated with the use of generative AI to develop,
133
00:08:14.730 --> 00:08:19.250
uh, test content? How about you, Marc?
134
00:08:19.250 --> 00:08:21.730
Can you take a stab at that one since you're our legal expert?
135
00:08:22.670 --> 00:08:27.330
How much time do we have? Um, no, I'd be,
136
00:08:27.350 --> 00:08:29.970
I'd be glad to answer that question. One,
137
00:08:30.190 --> 00:08:34.130
one question that I would ask in response to that question
138
00:08:35.030 --> 00:08:39.690
is, how many of the programs that say they are not using,
139
00:08:40.470 --> 00:08:44.830
uh, generative AI right now to create test content,
140
00:08:45.740 --> 00:08:50.630
know whether their subject matter experts are, before they
141
00:08:50.630 --> 00:08:53.590
show up for your item writing workshops, uh,
142
00:08:53.960 --> 00:08:56.070
going ahead and using it all on their own?
143
00:08:56.810 --> 00:09:01.790
So one thing I would say is you really need to have excellent
144
00:09:01.790 --> 00:09:06.470
agreements in place with all of your content developers and subject
145
00:09:06.470 --> 00:09:10.230
matter experts, and I would say that includes employees, contractors,
146
00:09:10.450 --> 00:09:11.283
and volunteers.
147
00:09:11.740 --> 00:09:15.470
Make it really clear. If you're not ready to start using generative AI,
148
00:09:15.610 --> 00:09:20.390
you have to have an explicit prohibition in all of those agreements with all of
149
00:09:20.390 --> 00:09:22.870
your content developers, again, employees,
150
00:09:22.870 --> 00:09:27.750
contractors and volunteers to make sure they're prohibited and they understand
151
00:09:27.750 --> 00:09:32.430
that they're prohibited from using, uh, generative AI to create content. Um,
152
00:09:32.630 --> 00:09:33.430
so that's number one,
153
00:09:33.430 --> 00:09:37.030
because the reality may be that some of you are using it and you don't know it.
154
00:09:37.610 --> 00:09:42.310
Um, and that's a dangerous place to be. Um, one other point I wanna make is,
155
00:09:43.610 --> 00:09:47.270
um, some people out there who are, you know,
156
00:09:47.270 --> 00:09:52.190
early adopters and really jazzed up about Chat GPT and generative AI,
157
00:09:52.660 --> 00:09:57.550
they might, um, mistakenly label the things I'm telling you today, uh,
158
00:09:57.690 --> 00:10:01.550
as FUD, right? Oh, this guy, he is the lawyer.
159
00:10:01.770 --> 00:10:05.470
He is the king of fear, uncertainty, and doubt. And I,
160
00:10:05.550 --> 00:10:09.990
I just want to disabuse everyone of that notion because that's not the case.
161
00:10:11.070 --> 00:10:13.190
I think generative AI is amazing.
162
00:10:13.710 --> 00:10:18.630
I think it offers incredible promise for folks who work in test development.
163
00:10:19.330 --> 00:10:21.910
Um, and you know, we're really just at
164
00:10:21.910 --> 00:10:26.870
a nascent stage of using this technology. So what do I see my job as?
165
00:10:27.010 --> 00:10:27.843
My role,
166
00:10:28.130 --> 00:10:32.790
my role is to help you understand risk and help you figure out
167
00:10:32.900 --> 00:10:36.230
ways to reduce risk. So if you're going into this,
168
00:10:36.230 --> 00:10:39.030
you're going into this with your eyes wide open,
169
00:10:39.470 --> 00:10:43.750
I don't think it should be a legal decision about whether you use generative AI,
170
00:10:44.290 --> 00:10:46.150
uh, to develop test content.
171
00:10:46.810 --> 00:10:50.910
But I think you're being reckless if you proceed without getting a legal
172
00:10:50.910 --> 00:10:55.710
opinion. Um, and really understanding the risks and at least figuring out, hey,
173
00:10:55.780 --> 00:10:59.830
well, is the business case here to do this? Is, you know, sort of,
174
00:10:59.890 --> 00:11:04.710
is it worth the risk? So there are some risks. What are they? Um,
175
00:11:05.010 --> 00:11:08.750
the risks are with your inputs, your prompts, right?
176
00:11:09.210 --> 00:11:13.790
And your outputs. Um, the first one I'm gonna mention is confidentiality.
177
00:11:14.050 --> 00:11:15.270
And you may be surprised,
178
00:11:15.290 --> 00:11:19.270
but why isn't the lawyer talking about copyright right away? I'll get to it.
179
00:11:19.690 --> 00:11:24.670
But confidentiality is a big issue, right? Because in Chat GPT,
180
00:11:24.670 --> 00:11:28.950
which is like this public facing platform, whatever you put in there,
181
00:11:29.140 --> 00:11:34.030
it's not confidential. Okay? So let's be real clear about that. Also,
182
00:11:34.180 --> 00:11:38.030
they're using your training data on Chat GPT, they, forgive me.
183
00:11:38.030 --> 00:11:41.830
They're using your inputs and your prompts as part of the training data.
184
00:11:42.290 --> 00:11:45.870
So that means other users are going to benefit from your inputs and may
185
00:11:45.870 --> 00:11:50.750
ultimately be able to access some part of your inputs and prompts. Okay?
186
00:11:50.980 --> 00:11:54.880
What about the outputs? Same thing on Chat GPT. Um,
187
00:11:54.900 --> 00:11:57.240
the outputs are not confidential, okay?
188
00:11:57.820 --> 00:12:02.240
So if you are trying to create secure test content using Chat GPT,
189
00:12:02.860 --> 00:12:03.320
um,
190
00:12:03.320 --> 00:12:07.080
I think that is probably something you will just wanna be really aware of.
191
00:12:07.260 --> 00:12:09.840
It can't be secure, there's nothing secure about it.
192
00:12:10.380 --> 00:12:14.640
So what can you do to at least address that particular issue?
193
00:12:15.150 --> 00:12:20.000
Well, one thing you can do is use an enterprise level solution that provides
194
00:12:20.000 --> 00:12:24.920
state-of-the-art data security features and includes in the enterprise level
195
00:12:24.920 --> 00:12:26.960
agreement, a confidentiality provision.
196
00:12:27.480 --> 00:12:30.940
And this is not like a far-fetched future state situation.
197
00:12:31.740 --> 00:12:36.660
Microsoft offers GPT-4 through Azure in a
198
00:12:36.660 --> 00:12:37.780
setting, um,
199
00:12:37.880 --> 00:12:42.420
in a very secure setting that has already been declared HIPAA
200
00:12:42.420 --> 00:12:46.800
compliant. Okay? So it can be done.
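For illustration, a minimal sketch of what Marc is pointing at: calling a GPT-4 deployment through Azure OpenAI, inside your own tenant and enterprise agreement, instead of the public Chat GPT site. The endpoint, deployment name, and API version are placeholders; assumes the OpenAI Python SDK:

```python
# Sketch: route prompts through an Azure OpenAI deployment so inputs and
# outputs stay under the enterprise confidentiality terms, not a public site.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2023-05-15",  # placeholder API version
)

response = client.chat.completions.create(
    model="your-gpt4-deployment",  # the deployment name created in Azure
    messages=[
        {"role": "system", "content": "You draft exam items for internal review only."},
        {"role": "user", "content": "Draft three multiple-choice items on topic X."},
    ],
)
print(response.choices[0].message.content)
```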
201
00:12:47.500 --> 00:12:51.880
Um, in addition, um, OpenAI isn't the only game in town.
202
00:12:52.340 --> 00:12:56.120
Um, you know, there are other large language models, um,
203
00:12:56.510 --> 00:13:00.120
including one developed by, uh,
204
00:13:00.310 --> 00:13:05.240
Meta called Llama. Um, there's another model called Alpaca.
205
00:13:05.990 --> 00:13:09.730
Did you know that? These are, um, open source,
206
00:13:10.260 --> 00:13:13.410
large language models that you can install locally.
207
00:13:14.320 --> 00:13:15.370
Like that's huge.
208
00:13:16.150 --> 00:13:20.970
If you can have your own large language model and create a generative AI
209
00:13:21.330 --> 00:13:25.290
platform that no one else has access to. My goodness, that's great!
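For illustration, a minimal sketch of the locally installed option Marc mentions, assuming the llama-cpp-python package and a locally downloaded open-source weights file; the path and settings are placeholders. Prompts and drafts never leave the machine:

```python
# Sketch: run an open-source model entirely on local hardware so no prompt
# or generated item is sent to any outside service.
from llama_cpp import Llama

llm = Llama(model_path="./models/llama-2-13b-chat.Q4_K_M.gguf")  # placeholder path

out = llm(
    "Write one multiple-choice question on IPv4 subnetting, with four "
    "options and the correct answer marked.",
    max_tokens=256,
    temperature=0.7,
)
print(out["choices"][0]["text"])
```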
210
00:13:25.390 --> 00:13:29.450
So that's something you want to think about or consider whether it's, uh,
211
00:13:29.730 --> 00:13:34.410
Microsoft Azure or some other platform, security and confidentiality are key.
212
00:13:34.670 --> 00:13:35.470
And I'm gonna keep,
213
00:13:35.470 --> 00:13:38.770
I'm gonna move this as quickly as I can cause I don't wanna monopolize the time
214
00:13:38.770 --> 00:13:41.210
here, but I want to talk about the copyright issues.
215
00:13:41.670 --> 00:13:44.610
And I wanna point out copyright isn't the be all, end all.
216
00:13:44.940 --> 00:13:48.410
There are other ways you can protect your test content, okay?
217
00:13:48.830 --> 00:13:53.530
One example is using the Defend Trade Secrets Act. Okay?
218
00:13:54.070 --> 00:13:58.930
And in order to have a trade secret, you actually have to take steps to ensure,
219
00:13:59.350 --> 00:14:03.170
uh, that the information is in fact kept secret.
220
00:14:03.750 --> 00:14:07.810
So those first, uh, solutions I talked about, which maintain confidentiality,
221
00:14:07.910 --> 00:14:08.743
that's critical.
222
00:14:09.230 --> 00:14:13.210
Anyone who has access to your test content needs to have a non-disclosure
223
00:14:13.210 --> 00:14:14.043
agreement.
224
00:14:14.310 --> 00:14:18.370
And you don't need to register anything under the Defend Trade Secrets Act.
225
00:14:18.710 --> 00:14:23.160
And you have the same kinds of, um, sort of, uh,
226
00:14:23.310 --> 00:14:27.720
enforcement tools available to you under the Defend Trade Secrets Act that you
227
00:14:27.720 --> 00:14:31.200
have under the Copyright Act, and you don't have to worry about registration.
228
00:14:31.660 --> 00:14:36.120
So I would strongly encourage you to speak to your counsel about the ability to
229
00:14:36.120 --> 00:14:40.760
use the Defend Trade Secrets Act to protect your test content so that you don't
230
00:14:40.760 --> 00:14:44.880
have to worry about this whole copyright bugaboo. But, uh, last but not least,
231
00:14:44.980 --> 00:14:48.920
the copyright bugaboo is a real thing because, uh,
232
00:14:48.940 --> 00:14:53.680
we are in a state of uncertainty about the data
233
00:14:54.020 --> 00:14:58.800
and content that was used to train these models. Um, we know for example,
234
00:14:58.800 --> 00:15:03.800
with Chat GPT, um, the model was, uh, it was scraped from the internet.
235
00:15:03.820 --> 00:15:06.720
All the data was scraped from the internet over a period of a few months.
236
00:15:06.720 --> 00:15:11.160
It ended in September 2021. That means everything on the internet, you know,
237
00:15:11.160 --> 00:15:13.320
that's a lot of stuff. And guess what?
238
00:15:13.550 --> 00:15:17.880
Much of it is owned by people and companies and copyright protected.
239
00:15:18.420 --> 00:15:23.240
Now, if you are using that data and you create outputs
240
00:15:24.020 --> 00:15:28.280
and you incorporate those outputs into your test, and someone comes along,
241
00:15:28.450 --> 00:15:31.520
maybe it's a competitor in the same certification space,
242
00:15:31.930 --> 00:15:33.760
maybe it's someone who works in education,
243
00:15:33.760 --> 00:15:37.560
there are lots of different kinds of educational tests and assessments, right?
244
00:15:37.900 --> 00:15:40.240
And they come and they look at your content and they say, oh my goodness,
245
00:15:40.540 --> 00:15:43.600
that's pretty much, that's pretty much our content. Oh, and yeah,
246
00:15:43.620 --> 00:15:45.520
our content was out there on the internet. Oh,
247
00:15:45.520 --> 00:15:50.480
you used Chat GPT to create this or fill in the blank large language model.
248
00:15:51.300 --> 00:15:54.720
Um, you may have a problem. Okay, so what is,
249
00:15:54.750 --> 00:15:57.800
what can you do if you want to use these models?
250
00:15:58.220 --> 00:16:02.920
You have to use licensed or public source training data.
251
00:16:03.100 --> 00:16:06.240
Public source means nobody owns it, okay?
252
00:16:06.700 --> 00:16:08.680
So I want to be clear about that. However,
253
00:16:08.690 --> 00:16:12.720
there are models for licensing these data and you know, for example,
254
00:16:12.740 --> 00:16:17.440
if you created a closed system, maybe you're a medical certification program,
255
00:16:18.180 --> 00:16:18.500
um,
256
00:16:18.500 --> 00:16:23.200
you can get licenses to all this medical content and you can train
257
00:16:23.440 --> 00:16:28.360
your model on like actually relevant content that's licensed and
258
00:16:28.360 --> 00:16:31.720
you're allowed to use it. Um, and then whatever comes out,
259
00:16:31.720 --> 00:16:35.320
you have a high degree of confidence that no one's gonna come at you later and
260
00:16:35.320 --> 00:16:39.280
say, hey, that's mine! So, um, I've been talking for way too long.
261
00:16:39.500 --> 00:16:40.120
I'm gonna stop.
262
00:16:40.120 --> 00:16:43.680
I know there's a lot of questions and I have a lot more to say about it,
263
00:16:43.740 --> 00:16:46.880
but I'm just gonna turn it back to you, Brodie, and, uh,
264
00:16:47.480 --> 00:16:52.000
continue to listen for more questions and see if I can contribute.
265
00:16:52.650 --> 00:16:55.760
Great, great. No, it's great, it's insightful, Marc.
266
00:16:55.840 --> 00:16:59.560
I feel like I just, uh, gained a law degree by listening to you, uh,
267
00:16:59.560 --> 00:17:03.560
in this area. So this is good. Um, no, but I think, Marc,
268
00:17:03.560 --> 00:17:05.920
you're bringing up a lot of things that a lot of people have questions and
269
00:17:05.920 --> 00:17:06.440
concerns about.
270
00:17:06.440 --> 00:17:10.560
And the problem is there's too much stuff for us to get into
271
00:17:10.560 --> 00:17:14.520
today, but it's just getting them to start thinking about it. Uh, some of the,
272
00:17:14.700 --> 00:17:17.200
one of the next questions that got brought up,
273
00:17:17.470 --> 00:17:20.400
there's a
274
00:17:20.560 --> 00:17:23.480
theme in a couple of the questions below, and it got a lot of votes.
275
00:17:23.860 --> 00:17:28.840
How could someone use AI to cheat on a high stakes exam that has a lockdown
276
00:17:28.840 --> 00:17:31.680
browser? How are they gonna get past it, and so on.
277
00:17:31.680 --> 00:17:34.800
And then there's other questions about, can
278
00:17:34.820 --> 00:17:38.200
Chat GPT predict the,
279
00:17:38.540 --> 00:17:41.600
and there was a separate question for this, predict the, uh,
280
00:17:41.600 --> 00:17:44.760
types of questions that are gonna be on the exam. So why don't we start,
281
00:17:44.980 --> 00:17:49.520
uh, since we got the tech guru on, Pat, can you take a stab at the first one?
282
00:17:49.580 --> 00:17:54.360
How could someone use AI to cheat on high stakes exams even with a
283
00:17:54.640 --> 00:17:55.473
lockdown browser?
284
00:17:56.240 --> 00:18:00.410
Yeah, that actually was a question Liberty asked, uh, just yesterday about this.
285
00:18:00.990 --> 00:18:04.410
Um, and it's an area that ITS has spent a little bit of time on 'cause, uh,
286
00:18:04.410 --> 00:18:09.370
one of our focal points is the secure browser. Um,
287
00:18:09.510 --> 00:18:14.170
we have actually created a tool that can sit kind of in the background,
288
00:18:14.590 --> 00:18:16.250
um, and, you know,
289
00:18:16.910 --> 00:18:20.770
answer the questions as you navigate through the test automatically. Um,
290
00:18:20.770 --> 00:18:23.370
it's quite frightening when you see it, right? I mean, it actually
291
00:18:23.370 --> 00:18:28.370
takes the multiple choice content, um, takes images, sends 'em off to Chat
292
00:18:28.490 --> 00:18:32.450
GPT, asks it for the answer, gets back an answer, and then answers it,
293
00:18:32.450 --> 00:18:34.370
and then it'll do essays too. Um,
294
00:18:34.370 --> 00:18:38.970
we haven't done anything with more advanced item types. Um, so really, you know,
295
00:18:38.970 --> 00:18:42.170
the secure browser is kind of your protection point for this, right?
296
00:18:42.170 --> 00:18:45.690
This is the piece of software that locks down the test
297
00:18:46.070 --> 00:18:49.250
and makes it, um, you know, so that can't happen. Now, to do that,
298
00:18:49.310 --> 00:18:53.730
the secure browser needs to do two things. One of two things really. Um,
299
00:18:55.150 --> 00:18:59.650
the software that cheats basically does a screen scrape.
300
00:18:59.650 --> 00:19:02.370
It's pulling off all the content that's on the item, in the question,
301
00:19:02.750 --> 00:19:07.570
and then it's basically sending that off over the internet to Chat GPT to get
302
00:19:07.570 --> 00:19:08.150
the answer.
303
00:19:08.150 --> 00:19:12.370
So you can either block it through the internet communication, block it from
304
00:19:12.370 --> 00:19:16.450
communicating to something external, or you can block it from, um,
305
00:19:17.000 --> 00:19:20.730
copying, getting it off the screen. Um, the best is both. Um,
306
00:19:21.160 --> 00:19:24.490
most people will probably tell you their secure browser does this.
307
00:19:24.720 --> 00:19:27.050
Most people would not be correct, um,
308
00:19:27.160 --> 00:19:30.010
in that it is actually possible to screen scrape it. And
309
00:19:30.010 --> 00:19:32.370
it's a lot easier than you might think if you have the right
310
00:19:32.630 --> 00:19:36.650
technology to do it. Um, it's an area that's a real challenge
311
00:19:36.990 --> 00:19:41.810
to deal with. Um, so, you know,
312
00:19:41.810 --> 00:19:44.490
there's a couple things you can do to prevent it. One is, you know,
313
00:19:44.490 --> 00:19:46.450
just making sure that with your secure browser,
314
00:19:46.550 --> 00:19:49.570
you've tested against certain things and you've got a good
315
00:19:49.770 --> 00:19:54.010
solution there. Um, advanced item types create some challenges for this,
316
00:19:54.010 --> 00:19:56.770
right? So it helps, uh,
317
00:19:56.770 --> 00:19:58.810
if you have an advanced item type that's not multiple choice,
318
00:19:59.030 --> 00:20:02.770
it doesn't fit into the Chat GPT side, though, even in that case,
319
00:20:02.790 --> 00:20:06.930
it can sort of take a prompt and give you some information to sort of, uh,
320
00:20:07.190 --> 00:20:10.010
do things. You might think images might stop it too.
321
00:20:10.010 --> 00:20:13.010
Like if I have an image with text on it, that might be a way. It doesn't:
322
00:20:13.070 --> 00:20:16.050
you can screen scrape text off the images. It's like it
323
00:20:16.070 --> 00:20:17.810
didn't really matter. So the,
324
00:20:18.050 --> 00:20:21.970
the big thing to take away from this is that this is a new threat in cheating
325
00:20:21.970 --> 00:20:24.970
that we've never seen before. Um, when you had proxy testing,
326
00:20:25.150 --> 00:20:27.250
you basically had to invite somebody else into your life,
327
00:20:27.570 --> 00:20:29.770
somebody on your computer, somebody nasty, right?
328
00:20:30.090 --> 00:20:32.210
Somebody you didn't want to really have a relationship with.
329
00:20:32.600 --> 00:20:35.490
This is a piece of software that anybody can write,
330
00:20:35.830 --> 00:20:40.410
run on a machine and basically take a test remotely and cheat and you'll have no
331
00:20:40.410 --> 00:20:43.570
idea they're doing it. Um, and it's, that's pretty frightening.
332
00:20:45.770 --> 00:20:50.760
I, uh, that is frightening. Liberty, how do you feel about that?
333
00:20:53.610 --> 00:20:54.990
Um, I'm freaking out!
334
00:20:57.720 --> 00:21:01.550
Which is why I asked the question of Pat yesterday, because I had,
335
00:21:01.810 --> 00:21:06.470
uh, heard that there was a way with APIs that even with browser lock, if
336
00:21:06.570 --> 00:21:08.510
you set up the API and, you know,
337
00:21:08.510 --> 00:21:12.630
the time, and you can time 'em to launch at a certain point, there's no way to stop it.
338
00:21:12.650 --> 00:21:16.630
And so you can actually have access to Chat GPT while you're taking the exam
339
00:21:16.650 --> 00:21:20.710
and nobody would even know. And so I do think that, um,
340
00:21:22.510 --> 00:21:25.970
the thing about proxy testers is you knew you were getting
341
00:21:26.280 --> 00:21:27.810
into bed, for lack of a better word,
342
00:21:28.070 --> 00:21:32.290
with someone who was probably gonna steal a lot more than just your, uh,
343
00:21:32.350 --> 00:21:35.610
you know, they probably were not doing it out of the kindness of their heart. Um,
344
00:21:35.830 --> 00:21:38.530
so, but with this, right, it's like,
345
00:21:38.550 --> 00:21:41.770
it seems safe and innocuous that I'm gonna have a
346
00:21:41.890 --> 00:21:46.050
relationship with this technology if I just have the right software.
347
00:21:46.150 --> 00:21:51.050
So I do think it's something that we have to reimagine not only the
348
00:21:51.050 --> 00:21:52.050
way we think about security,
349
00:21:52.110 --> 00:21:54.850
but I do think to some of the stuff that Pat alluded to,
350
00:21:54.850 --> 00:21:58.730
the way we write our questions and the types of questions we ask, um,
351
00:21:58.870 --> 00:22:01.290
and maybe even reimagine how, like,
352
00:22:01.550 --> 00:22:04.290
if people are going to try to use Chat GPT to cheat,
353
00:22:04.290 --> 00:22:08.770
then how do we maybe reimagine the questions we ask knowing that they're gonna
354
00:22:08.790 --> 00:22:12.810
try to do that. So what do those questions look like? How do we, uh,
355
00:22:12.840 --> 00:22:16.090
make sure that if even if they got access to Chat GPT,
356
00:22:16.230 --> 00:22:18.970
it wouldn't help them in an unfair way?
357
00:22:20.380 --> 00:22:23.240
Thanks, Liberty. What about you, Bridget? Your hand was raised. Um,
358
00:22:23.270 --> 00:22:26.440
Yeah, I just wanted to add a couple things. First of all, thanks to Pat for,
359
00:22:26.540 --> 00:22:29.080
you know, the buzzkill right out of the gates here.
360
00:22:29.300 --> 00:22:33.720
So now everybody's afraid if they weren't already. Um, we didn't actually,
361
00:22:34.220 --> 00:22:36.800
um, talk about the other part of the question you asked,
362
00:22:36.800 --> 00:22:39.520
which somebody had asked about, sort of, can Chat
363
00:22:39.640 --> 00:22:43.040
GPT anticipate the next questions coming? Uh,
364
00:22:43.140 --> 00:22:45.040
I'm not gonna answer that as yes or no,
365
00:22:45.040 --> 00:22:48.400
'cause that'd be foolish, 'cause I could be proved wrong instantaneously, I'm sure.
366
00:22:48.940 --> 00:22:52.080
But I would just point whoever asked that back to what Marc said,
367
00:22:52.580 --> 00:22:56.640
you have to pay attention to how you actually generated these items, right?
368
00:22:56.640 --> 00:23:01.520
Were you using an open source local install scenario where
369
00:23:01.520 --> 00:23:06.320
you had everything just in its own little shell? Or were you in an open, you know,
370
00:23:06.500 --> 00:23:09.040
source where everybody else's data is there?
371
00:23:09.180 --> 00:23:14.120
So I would just caution folks to recognize the more you put your item data out
372
00:23:14.120 --> 00:23:18.280
there for others to see, the more likely bad things are gonna happen. And I,
373
00:23:18.280 --> 00:23:21.560
and I think from a security standpoint, to what Liberty said,
374
00:23:21.900 --> 00:23:26.040
we just have to acknowledge this is another security element that we have to
375
00:23:26.040 --> 00:23:29.520
address. It's always the whack-a-mole game as it relates to security,
376
00:23:29.810 --> 00:23:33.800
which is not a technical term. So I think that resonates with everybody,
377
00:23:33.860 --> 00:23:37.080
but making sure that you're constantly thinking about how do you make it harder
378
00:23:37.080 --> 00:23:39.600
for folks who we know are gonna try to cheat.
379
00:23:39.600 --> 00:23:43.520
I think what Liberty was getting at, though, that makes more people nervous is you
380
00:23:43.520 --> 00:23:47.560
used to have to be a really bad actor to cheat. Um, everyday
381
00:23:47.630 --> 00:23:51.480
Joes and Janes may not recognize their use and leverage of Chat
382
00:23:51.640 --> 00:23:53.560
GPT is just as dangerous and bad.
383
00:23:55.340 --> 00:23:56.173
Go ahead, Marc.
384
00:23:58.090 --> 00:24:02.410
I just wanted to follow on with, um, a thought about this again.
385
00:24:03.030 --> 00:24:07.730
Um, you know, I understand test prep, uh,
386
00:24:07.790 --> 00:24:12.490
can be done ethically, and it is in many instances. Um,
387
00:24:12.590 --> 00:24:16.370
But now, every lawyer's favorite word: however. Uh,
388
00:24:16.490 --> 00:24:21.130
I do think this is the dawning of a new era for, um,
389
00:24:21.310 --> 00:24:22.120
you know,
390
00:24:22.120 --> 00:24:26.850
unethical test prep because apart from whether someone is savvy
391
00:24:26.850 --> 00:24:31.250
enough to, you know, get the GPT-4 plugin, you know,
392
00:24:31.270 --> 00:24:35.930
to get real time answers to questions while taking the test, um,
393
00:24:36.110 --> 00:24:40.930
the, you know, the other piece to this is, uh, on the prep side. I mean,
394
00:24:41.230 --> 00:24:45.530
you know, some organizations have really detailed test blueprints
395
00:24:46.430 --> 00:24:50.290
and, I gotta tell you, if you haven't done this, um,
396
00:24:50.490 --> 00:24:54.530
I highly recommend you do it in a secure, uh, environment.
397
00:24:55.750 --> 00:25:00.650
But go ahead and take a few of the sections of your test blueprint and give
398
00:25:00.650 --> 00:25:05.650
prompts to Chat GPT and ask it to write items in the format
399
00:25:05.800 --> 00:25:10.770
that you use. Um, you know, several items for each, you know,
400
00:25:10.800 --> 00:25:13.410
area of your blueprint and see how it does.
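For illustration, a minimal sketch of the blueprint exercise Marc suggests, assuming the OpenAI Python SDK run in a secure environment; the blueprint sections, prompt, and model name are illustrative:

```python
# Sketch: feed a few blueprint sections to a model and see how close its
# items come to the ones your program actually uses.
from openai import OpenAI

client = OpenAI()

blueprint_sections = [
    "1.2 Configure role-based access control",  # illustrative section
    "2.1 Manage storage account security",      # illustrative section
]

for section in blueprint_sections:
    resp = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=[{
            "role": "user",
            "content": "Write three multiple-choice items, each with one "
                       "correct answer and three distractors, for this exam "
                       "blueprint section: " + section,
        }],
    )
    print(f"--- {section} ---\n{resp.choices[0].message.content}\n")
```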
401
00:25:14.130 --> 00:25:17.730
I can tell you that, you know, these may not be items that are gonna,
402
00:25:17.750 --> 00:25:19.170
you know, do real well in review.
403
00:25:19.630 --> 00:25:24.290
But it's pretty frightening from a test prep standpoint how
404
00:25:24.320 --> 00:25:26.050
effective, uh,
405
00:25:26.200 --> 00:25:31.130
Chat GPT is at predicting exactly the kinds of questions you want to
406
00:25:31.190 --> 00:25:35.850
ask about, um, you know, your subject areas and standards. Um,
407
00:25:35.990 --> 00:25:39.930
so that's something that other organizations are going to do,
408
00:25:39.930 --> 00:25:42.890
whether you do it or not. Um, so, you know,
409
00:25:42.890 --> 00:25:45.010
in a lot of ways it doesn't matter. Yes,
410
00:25:45.010 --> 00:25:49.010
it matters if you use generative AI to create your own content.
411
00:25:49.230 --> 00:25:52.090
And there are really better ways to do things,
412
00:25:52.090 --> 00:25:53.610
and that's what we're talking about here.
413
00:25:53.990 --> 00:25:56.090
But whether you choose to do that or not,
414
00:25:56.230 --> 00:25:59.370
the world around you doesn't care what you do.
415
00:25:59.400 --> 00:26:03.610
They're going to do everything that they need to do to get an advantage for
416
00:26:03.850 --> 00:26:08.770
whatever their objectives may be. So you gotta be thinking about, you know,
417
00:26:08.950 --> 00:26:12.930
how generative AI is changing what you do, whether you like it or not,
418
00:26:14.470 --> 00:26:15.840
Embrace the change. Go ahead, Pat.
419
00:26:17.250 --> 00:26:20.190
Uh, you know, I was talking about sort of the negative, but
420
00:26:20.190 --> 00:26:22.750
I'm a big believer in AI in assessment in general,
421
00:26:22.750 --> 00:26:26.070
and I thought it would be worth just mentioning there's more to this than just
422
00:26:26.070 --> 00:26:28.590
generating items, right? Um,
423
00:26:28.590 --> 00:26:30.990
there's other things that are happening that are kind of exciting.
424
00:26:31.300 --> 00:26:32.710
Some we've been seeing for a while,
425
00:26:32.730 --> 00:26:36.550
remote proctoring has been using AI to sort of detect, um,
426
00:26:36.580 --> 00:26:38.430
like for proctors in assisted mode,
427
00:26:38.460 --> 00:26:41.070
sort of let them know something's happening. And that, you know,
428
00:26:41.070 --> 00:26:45.440
that's been kind of useful. Um, generating rationales and learning content,
429
00:26:45.700 --> 00:26:49.200
you know, uh, that go with your content, might be a
430
00:26:49.240 --> 00:26:51.800
safer thing than high stakes content. Um,
431
00:26:51.800 --> 00:26:55.680
And that's something a lot of programs do. We have, um,
432
00:26:55.880 --> 00:26:58.320
a problem with programs like Microsoft that
433
00:26:58.320 --> 00:27:02.880
get just an enormous amount of item feedback, right?
434
00:27:03.220 --> 00:27:07.640
So they get item comments that just sort of overwhelm their item writers to
435
00:27:07.640 --> 00:27:08.140
deal with.
436
00:27:08.140 --> 00:27:11.720
And one of the things we're doing is gonna be using the generative AI to
437
00:27:11.720 --> 00:27:15.400
basically analyze all the item comments and give 'em a summary of the issues with
438
00:27:15.400 --> 00:27:18.760
an item so they don't have to go read through 500 item comments,
439
00:27:18.980 --> 00:27:21.880
and rather they get an instant sort of feedback on
440
00:27:21.900 --> 00:27:25.960
one item's content. And things like this are just gonna make our lives better in
441
00:27:25.960 --> 00:27:28.160
the process, right? So, um, you know,
442
00:27:28.180 --> 00:27:31.120
I'm a big believer that a lot of the issues with generating items will get
443
00:27:31.300 --> 00:27:35.600
solved over time with licensed content and sort of secure models.
444
00:27:36.100 --> 00:27:37.240
But, but in the meantime,
445
00:27:37.340 --> 00:27:40.000
all sorts of interesting things are happening that are positive.
446
00:27:40.060 --> 00:27:43.760
So I just wanna kind of throw that out there. Great. Thanks, Pat.
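For illustration, a minimal sketch of the item-comment triage Pat describes, assuming the OpenAI Python SDK; the function, prompt, and sample comments are placeholders, not ITS's actual tool:

```python
# Sketch: condense hundreds of candidate comments on one item into a short
# summary of recurring issues, so item writers don't read them one by one.
from openai import OpenAI

client = OpenAI()

def summarize_item_comments(comments: list[str]) -> str:
    """Return a short, issue-oriented summary of raw item comments."""
    joined = "\n".join(f"- {c}" for c in comments)
    resp = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=[{
            "role": "user",
            "content": "Summarize the distinct issues raised in these exam "
                       "item comments, most frequent first:\n" + joined,
        }],
    )
    return resp.choices[0].message.content

# Example: a pile of raw comments collapses to a few recurring themes.
print(summarize_item_comments([
    "Option B is also arguably correct.",
    "The exhibit doesn't load.",
    "Terminology is outdated since the portal redesign.",
]))
```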
447
00:27:43.850 --> 00:27:46.240
We're gonna shift gears a little. I'm gonna take it back.
448
00:27:46.270 --> 00:27:48.480
Andy asked a great question; it's got 11 votes.
449
00:27:48.750 --> 00:27:53.480
What are the key considerations test publishers need to consider when
450
00:27:53.480 --> 00:27:58.480
using generative AI tools to assist in generating content for items, cases,
451
00:27:58.740 --> 00:28:03.680
and rationales as it relates to content ownership and usage
452
00:28:03.900 --> 00:28:07.560
in exam programs? Bridget, and I'm putting you on the spot,
453
00:28:07.580 --> 00:28:09.920
but I think this will be a great one for you to kick off.
454
00:28:10.350 --> 00:28:12.840
Sure. Uh, there's lots to consider.
455
00:28:13.020 --> 00:28:16.320
So I'll tell you the first three that popped into my brain. Um,
456
00:28:16.540 --> 00:28:20.240
the first one is, what is your corporate policy? I think, Marc,
457
00:28:20.240 --> 00:28:22.720
you might have touched on this earlier and, and Liberty,
458
00:28:22.780 --> 00:28:27.160
you were very clear that Microsoft has a policy. Um, we also have a policy.
459
00:28:27.420 --> 00:28:30.480
So from a, um, organization standpoint,
460
00:28:31.220 --> 00:28:35.640
no one in your test publishing world or any world should be actually utilizing
461
00:28:35.640 --> 00:28:40.360
it for work activities until as a corporation, as an organization, you've said,
462
00:28:40.620 --> 00:28:44.240
here's our policy. So here are the, the parameters, right?
463
00:28:44.260 --> 00:28:47.640
If you're thinking about a road you're heading down, there better be some, um,
464
00:28:48.050 --> 00:28:51.200
rails to help keep people in line. It's really, really important.
465
00:28:51.420 --> 00:28:55.360
You then need to translate that into how do you define that for your subject
466
00:28:55.360 --> 00:28:59.240
matter experts? So now you've let other folks, um, Marc covered this.
467
00:28:59.270 --> 00:29:01.320
What are your policies that you have with those folks?
468
00:29:01.740 --> 00:29:02.960
But it's not just policies.
469
00:29:03.000 --> 00:29:06.200
I think you need to give them clear examples because I think we've all,
470
00:29:06.440 --> 00:29:09.600
I shouldn't say we've all, but those of you who have gotten into Chat GPT, you're like,
471
00:29:09.740 --> 00:29:12.880
oh, I'm gonna ask it this. And then you're like, well, that's kind of cool.
472
00:29:12.900 --> 00:29:14.200
And then the next thing you know,
473
00:29:14.670 --> 00:29:17.160
it's like TikTok and you've been on it for 30 minutes and you don't even know
474
00:29:17.160 --> 00:29:17.993
where your time went.
475
00:29:18.440 --> 00:29:22.120
I think your subject matter experts are going to get a little excited and maybe
476
00:29:22.140 --> 00:29:27.080
go places you didn't anticipate. So I would recommend parameters,
477
00:29:27.100 --> 00:29:28.320
but then having some check-ins,
478
00:29:28.320 --> 00:29:30.680
some re-check-ins and monitoring kind of what they're doing,
479
00:29:31.320 --> 00:29:34.560
I think it's super important. Uh, and then also hugely important,
480
00:29:34.560 --> 00:29:38.920
especially right now, where is your human factor in that entire process?
481
00:29:39.700 --> 00:29:43.080
Um, so Marc brought up that poor lawyer who's been in, you know,
482
00:29:43.180 --> 00:29:46.720
in business for 30 years, got in trouble because he, uh,
483
00:29:46.720 --> 00:29:50.680
used Chat GPT to help, and some of his sources weren't even legit. Um,
484
00:29:50.940 --> 00:29:54.400
so if you're gonna create items with it,
485
00:29:54.750 --> 00:29:57.680
where's your human factor? What is the role of it? Um,
486
00:29:57.680 --> 00:30:01.720
what kind of subject matter experts are in that venue, you know,
487
00:30:01.830 --> 00:30:04.320
what does that look like? You kinda have to redefine your process almost.
488
00:30:06.960 --> 00:30:09.830
Great, thank you. Anyone else wanna weigh in?
489
00:30:10.430 --> 00:30:14.070
I actually, I do. So I,
490
00:30:14.150 --> 00:30:18.420
I wanna hit on the very last bit of what Bridget said because, uh,
491
00:30:18.420 --> 00:30:22.740
generative AI is a storyteller, not a truthteller, right? I heard
492
00:30:22.820 --> 00:30:25.420
somebody early on, when we started talking about this, say this,
493
00:30:25.420 --> 00:30:28.460
and it's just really resonated with me. And, um,
494
00:30:28.780 --> 00:30:32.620
I think that what this means is you have to have SME review for high
495
00:30:32.620 --> 00:30:34.940
stakes content. You're gonna have to have SME review,
496
00:30:34.940 --> 00:30:39.460
because right now it's making stuff up and it sounds great.
497
00:30:39.570 --> 00:30:42.180
Like when I just read it, I'm like, wow, this sounds really great.
498
00:30:42.200 --> 00:30:44.340
And then you just start thinking about what it says and it's like,
499
00:30:44.400 --> 00:30:47.380
but it doesn't make any sense, or it's not real, or it's not true.
500
00:30:48.200 --> 00:30:53.180
And so, I think this means our SMEs have to be different.
501
00:30:53.180 --> 00:30:57.900
They actually might have to be super SMEs to recognize when something is not
502
00:30:58.340 --> 00:31:01.740
actually true. When we first started playing around with this,
503
00:31:01.760 --> 00:31:06.420
it was making up names of features and functions in Azure, for example.
504
00:31:06.960 --> 00:31:08.740
Now, this was really before,
505
00:31:08.810 --> 00:31:12.660
this was kind of really early in thinking about using AI to create content.
506
00:31:13.280 --> 00:31:15.260
And my SMEs were like, is that real?
507
00:31:15.260 --> 00:31:20.260
Because Microsoft's moving so fast that the SMEs literally thought it
508
00:31:20.260 --> 00:31:21.340
was something that was real.
509
00:31:21.360 --> 00:31:23.540
And they went and did a little research and they're like, oh, wait,
510
00:31:23.560 --> 00:31:26.360
it made it up. So,
511
00:31:26.500 --> 00:31:29.320
the SME review is going to be different,
512
00:31:29.320 --> 00:31:32.760
and I think it's going to mean, uh, like, what
513
00:31:32.760 --> 00:31:36.200
makes somebody a SME is also different. Uh,
514
00:31:36.380 --> 00:31:39.720
so just kind of keep in mind that with high stakes stuff right now,
515
00:31:39.740 --> 00:31:42.320
you can't trust it. You have to like,
516
00:31:42.420 --> 00:31:46.360
use some caution and what that means for your process is it's going to change
517
00:31:46.360 --> 00:31:49.240
and it means who your SME is probably changes too.
518
00:31:50.230 --> 00:31:54.300
Thanks, Liberty. Pat, you raised your hand. Uh,
519
00:31:54.690 --> 00:31:58.620
Yeah. Um, just to build on what Liberty was saying, it's, um,
520
00:31:58.760 --> 00:32:02.300
an anecdote from when we were kind of early in the process and we had it
521
00:32:02.500 --> 00:32:06.300
generate some questions, some technology questions, actually great questions.
522
00:32:06.300 --> 00:32:08.340
In this particular case, the questions weren't made up,
523
00:32:08.600 --> 00:32:12.420
but we also asked it to give us a source for the questions and the source,
524
00:32:12.520 --> 00:32:14.420
it actually generated a nice URL.
525
00:32:14.420 --> 00:32:16.820
And the URL showed us exactly where it had come from,
526
00:32:17.120 --> 00:32:21.220
except it was completely fake. Um, I mean,
527
00:32:21.270 --> 00:32:24.020
completely fake. And that comes down to sort of,
528
00:32:24.020 --> 00:32:27.540
if you're using Chat GPT, you can't trust what it tells you.
529
00:32:28.080 --> 00:32:30.900
You don't know where it came from, and if it tells you where it came from,
530
00:32:30.900 --> 00:32:33.980
you can't trust that either. Um, so, you know,
531
00:32:33.980 --> 00:32:37.620
when you talk about using this for high stakes, if you're in a private model,
532
00:32:38.160 --> 00:32:41.330
um, that, you know, we were touching on, that's one thing;
533
00:32:41.390 --> 00:32:43.170
if you're outside of that model, you know,
534
00:32:43.170 --> 00:32:45.850
it's insane on a high stakes program, quite honestly.
535
00:32:47.140 --> 00:32:49.330
Great, great feedback. Go ahead, Marc.
536
00:32:50.680 --> 00:32:55.260
Oh, thanks. Yeah, I have to agree, by the way, um, because I use,
537
00:32:55.640 --> 00:32:57.740
um, Chat GPT all the time. Of course,
538
00:32:57.780 --> 00:33:02.540
I never input privileged or confidential information or share anything
539
00:33:02.640 --> 00:33:07.220
of a sensitive nature in the, you know, subscription that I have to it.
540
00:33:07.760 --> 00:33:12.300
Um, and I also don't save my prompts or anything like that. But, uh,
541
00:33:12.420 --> 00:33:15.580
I will say that my experience with Chat GPT is,
542
00:33:15.580 --> 00:33:20.460
it has the worst case of the Dunning-Kruger effect that I've ever seen in
543
00:33:20.460 --> 00:33:25.260
my life. So yeah, you can't rely on it. It is so convincing.
544
00:33:25.260 --> 00:33:27.740
You're like, oh my god, yeah, yeah, that makes sense. And then you look it up,
545
00:33:27.740 --> 00:33:31.300
you're like, this is, it's just making it up. So,
546
00:33:31.300 --> 00:33:33.260
it's pretty remarkable. Um,
547
00:33:33.600 --> 00:33:37.660
and I also wanted to comment on this question because there was a question about
548
00:33:37.660 --> 00:33:40.540
content ownership, right? Um,
549
00:33:40.840 --> 00:33:45.070
and so you can own your outputs,
550
00:33:45.250 --> 00:33:49.550
but you know, that's sort of a loaded term because the way people would've,
551
00:33:49.940 --> 00:33:52.790
what people would've thought that meant, um,
552
00:33:52.790 --> 00:33:57.590
in the world before generative AI is, okay, this is original content. I,
553
00:33:57.590 --> 00:34:01.430
if I want, I can go get a copyright registration in my original content,
554
00:34:01.950 --> 00:34:04.550
I can license it to others, I can exploit it.
555
00:34:04.850 --> 00:34:07.030
That's usually what ownership means, right?
556
00:34:07.350 --> 00:34:10.510
I don't think it means quite the same thing here. Uh, um,
557
00:34:10.930 --> 00:34:15.470
the case law is quite clear, and the copyright office has made it clear that if
558
00:34:15.470 --> 00:34:20.110
you have, uh, content generated by generative AI, first of all,
559
00:34:20.130 --> 00:34:24.990
if you seek a copyright registration, you have to disclose any, uh,
560
00:34:24.990 --> 00:34:29.710
component of your content that was created by generative AI. And I will say,
561
00:34:29.810 --> 00:34:34.630
if what you're asking for is a copyright registration in content that
562
00:34:34.630 --> 00:34:37.910
is strictly created by generative AI, don't waste your time.
563
00:34:38.260 --> 00:34:40.390
It's strictly prohibited. Uh,
564
00:34:40.660 --> 00:34:45.510
copyright registrations can only be issued for content created by human beings.
565
00:34:45.690 --> 00:34:50.590
And that's been the law for quite a long time. Um, and nothing's changed yet.
566
00:34:51.370 --> 00:34:56.270
Um, now, what the copyright office has made clear in guidance is,
567
00:34:56.930 --> 00:34:57.470
um, you know,
568
00:34:57.470 --> 00:35:02.110
it's possible that the selection and arrangement, if
569
00:35:02.110 --> 00:35:05.070
sufficiently creative, of, uh,
570
00:35:05.070 --> 00:35:09.750
content created by generative AI may be capable of copyright
571
00:35:09.950 --> 00:35:13.670
registration. Well, that's really interesting. Um, you know,
572
00:35:13.700 --> 00:35:17.550
it's sort of an esoteric point though, because you don't, I wanna be clear,
573
00:35:17.690 --> 00:35:22.070
if you go for, like, say you create a test form and there's 200 items in it,
574
00:35:22.490 --> 00:35:24.830
and you're submitting that for registration,
575
00:35:25.050 --> 00:35:29.310
and it's all generative AI created content, uh, but you're saying, no, no,
576
00:35:29.310 --> 00:35:29.830
but it's, uh,
577
00:35:29.830 --> 00:35:33.310
we're gonna get the registration in the selection and the arrangement, great.
578
00:35:33.490 --> 00:35:36.110
But do you realize that that means that the content itself,
579
00:35:36.130 --> 00:35:39.560
if someone goes in and grabs five of your items and copies them verbatim,
580
00:35:39.980 --> 00:35:43.000
you can do literally nothing about it, okay?
581
00:35:43.000 --> 00:35:47.240
Because the registration only applies to the selection and arrangement of the
582
00:35:47.240 --> 00:35:52.240
content. Now, if someone took your form and copied it in its existing layout,
583
00:35:52.910 --> 00:35:54.840
okay, then, um, you know,
584
00:35:54.900 --> 00:35:58.920
you might be able to sue them for copyright infringement, but that's not really,
585
00:35:58.970 --> 00:35:59.620
those are,
586
00:35:59.620 --> 00:36:02.920
that's not the kind of infringement that occurs in my experience in the testing
587
00:36:02.930 --> 00:36:07.200
world. People don't really care about your selection or arrangement. Um,
588
00:36:07.700 --> 00:36:12.000
bad guys, um, bad people, uh, cheaters, fraudsters,
589
00:36:12.110 --> 00:36:15.880
they just want your content. And it doesn't frankly matter what order it's in.
590
00:36:16.500 --> 00:36:16.860
Um,
591
00:36:16.860 --> 00:36:21.640
now can you also do anything with the generative AI
592
00:36:21.750 --> 00:36:22.720
created content?
593
00:36:23.300 --> 00:36:27.840
You would have to so substantially modify it by human
594
00:36:27.900 --> 00:36:32.320
beings that it basically doesn't look like what came out
595
00:36:32.670 --> 00:36:36.720
from the generative AI platform. So, um, at that point,
596
00:36:36.820 --> 00:36:40.840
you may be able to get a copyright registration in the actual content,
597
00:36:41.420 --> 00:36:43.440
but you'd still have to disclose your process.
598
00:36:44.140 --> 00:36:46.400
So I will say that it is really important.
599
00:36:46.700 --> 00:36:51.320
One risk mitigation approach you can take is to really carefully document:
600
00:36:51.660 --> 00:36:53.360
if you're using generative AI,
601
00:36:53.820 --> 00:36:57.960
you have to document every stage of your process very carefully. Um,
602
00:36:58.070 --> 00:37:02.040
because if you ultimately think that what you've created is something wholly
603
00:37:02.800 --> 00:37:05.800
original and new, but it started with generative AI content,
604
00:37:05.820 --> 00:37:10.640
you're going to need to show the evolution of that content to be able to make a
605
00:37:10.670 --> 00:37:15.480
case for getting a copyright registration, if that is important to you.
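For anyone who wants to act on that documentation point, here is a minimal sketch, in Python, of what an append-only provenance log for AI-assisted items could look like. Everything here (the file name, the field names, the stage labels) is an illustrative assumption, not legal guidance or a product feature; consult counsel on what your records actually need to capture.

```python
# Hypothetical append-only provenance log for AI-assisted item development.
# Each revision is one JSON line, so the evolution from AI draft to
# human-reworked final form can be reconstructed later.
import hashlib
import json
from datetime import datetime, timezone

def log_item_revision(log_path, item_id, stage, text, author):
    """Append one revision record for an item.

    stage: illustrative labels, e.g. "ai_draft", "sme_edit", "final"
    author: a human editor or a model identifier
    """
    record = {
        "item_id": item_id,
        "stage": stage,
        "author": author,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
        "text": text,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Usage: log every stage, oldest to newest.
log_item_revision("item_log.jsonl", "ITM-0042", "ai_draft",
                  "Which port does HTTPS use by default?", "model-x")
log_item_revision("item_log.jsonl", "ITM-0042", "sme_edit",
                  "A client must reach a web server over TLS. "
                  "Which default port should the firewall allow?", "j.doe")
```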
606
00:37:15.940 --> 00:37:20.240
Now, copyright registration isn't the be-all and end-all. You can still, um,
607
00:37:20.300 --> 00:37:23.720
you know, do a lot of different things with the content, um,
608
00:37:23.830 --> 00:37:25.600
even if you can't get a registration in it.
609
00:37:25.620 --> 00:37:29.960
But one thing you won't be able to do is sue people for infringement if other
610
00:37:29.960 --> 00:37:34.040
people have substantially similar content. I'm gonna stop.
611
00:37:35.200 --> 00:37:39.000
Good, good stuff, Marc. Thank you. I'm going to, uh, shift gears again,
612
00:37:39.000 --> 00:37:41.840
to something outside this, because it's got, uh,
613
00:37:41.840 --> 00:37:43.320
the highest rating at this point.
614
00:37:43.690 --> 00:37:46.960
Apart from generative AI being used for test development,
615
00:37:47.100 --> 00:37:50.800
how have assessment companies been using AI and what benefits have they seen?
616
00:37:51.390 --> 00:37:54.000
Bridget, I'm gonna have you kick this one off, if you don't mind.
617
00:37:54.430 --> 00:37:58.960
Sure. Um, and if you don't mind, I actually think it's important,
618
00:37:59.300 --> 00:38:03.200
um, maybe not to think about it just as assessment companies. Um,
619
00:38:03.500 --> 00:38:08.120
all of us on this, um, call, maybe we have our own business,
620
00:38:08.120 --> 00:38:09.320
maybe we're part of a business,
621
00:38:09.740 --> 00:38:13.680
and so there's different facets that I think you can think about where generative
622
00:38:13.860 --> 00:38:17.680
AI or AI in general can be, um, helpful for. Specifically, though:
623
00:38:17.890 --> 00:38:21.880
Let's talk about assessment stuff first. Um, I think, Pat,
624
00:38:21.880 --> 00:38:23.200
you might have touched on this, uh,
625
00:38:23.330 --> 00:38:25.720
we've got folks that are using it for scoring, right?
626
00:38:25.860 --> 00:38:28.400
If you've got essay scoring going on, you've got some AI,
627
00:38:28.550 --> 00:38:30.320
it's not necessarily generative AI,
628
00:38:30.380 --> 00:38:33.640
but I think the question was about AI in general. Um,
629
00:38:33.720 --> 00:38:37.560
I think the test development is getting so much buzz right now because of the
630
00:38:37.560 --> 00:38:40.240
generative AI, um, activity.
631
00:38:40.460 --> 00:38:45.390
And it changes the dynamics in that particular area. Life-changing, right?
632
00:38:45.390 --> 00:38:48.390
From where, from where it was. Um, but there's other things, right?
633
00:38:48.410 --> 00:38:52.830
Remote proctoring, um, AI has been useful in that space.
634
00:38:53.370 --> 00:38:56.470
Um, I agree with what Pat said though: use it along with a human.
635
00:38:56.990 --> 00:39:01.630
I think where folks get nervous is if the AI is in charge and there's no one
636
00:39:01.630 --> 00:39:02.470
else checking its work.
637
00:39:02.470 --> 00:39:06.630
And I do think that's actually where Chat GPT is helping, uh,
638
00:39:06.630 --> 00:39:08.630
I hate to say the hallucinations can be good,
639
00:39:08.810 --> 00:39:13.350
but I think it is causing pause for people and a recognition that,
640
00:39:13.690 --> 00:39:17.870
um, if you're gonna leverage AI in whatever area, where's your human touch?
641
00:39:17.870 --> 00:39:21.450
Where's that human factor that's validating, etcetera. Um,
642
00:39:21.450 --> 00:39:24.970
but the other thing that you should think about as an organization is what kind
643
00:39:24.970 --> 00:39:29.050
of customer service areas do you have where AI can be useful
644
00:39:29.050 --> 00:39:30.890
to you? Where could it save you time?
645
00:39:31.020 --> 00:39:33.130
Think about all the documentation that's out there.
646
00:39:33.130 --> 00:39:37.250
If you've got call center folks answering things, think about, um, IMs,
647
00:39:37.390 --> 00:39:38.770
you know, you've got chatbots.
648
00:39:39.030 --> 00:39:43.450
How do you help your stakeholders get better at their jobs and help your
649
00:39:43.730 --> 00:39:45.330
constituents get answers faster?
650
00:39:45.360 --> 00:39:50.170
Because you're leveraging AI for work that historically has just been,
651
00:39:50.260 --> 00:39:53.410
you know, brute force for folks. And I,
652
00:39:53.730 --> 00:39:58.490
I would challenge everybody to not think about it as a generative AI equals test
653
00:39:58.490 --> 00:40:01.170
development challenge. It's more of an AI-
654
00:40:01.510 --> 00:40:06.330
how can I rethink how I do business today and think about functions in your
655
00:40:06.530 --> 00:40:09.410
business, not just I'm creating an exam.
656
00:40:10.170 --> 00:40:14.340
Yeah. Well, no, thank you, Bridget. That's really good, good insight. And Pat,
657
00:40:14.340 --> 00:40:14.740
It's funny,
658
00:40:14.740 --> 00:40:18.900
I was gonna put you on the spot because when we get into new other ways of using
659
00:40:19.440 --> 00:40:23.300
AI and, and so on, like, one of the things I thought was interesting is the,
660
00:40:23.400 --> 00:40:27.380
the creation of images that you can use for things or, or the flow of things.
661
00:40:27.440 --> 00:40:28.940
But Pat, can you just jump in a little?
662
00:40:31.810 --> 00:40:36.240
You're muted. That creation of images is,
663
00:40:36.420 --> 00:40:41.160
is sort of amazing. I, um, we deliver the Adobe program and we are looking,
664
00:40:41.160 --> 00:40:44.680
and they just added a new product, a feature called Firefly,
665
00:40:44.680 --> 00:40:49.480
and it generates images based on some prompts. And it's stunning. Literally,
666
00:40:49.480 --> 00:40:50.160
literally, it's,
667
00:40:50.160 --> 00:40:52.720
it's fabulous to look at to see what people are doing out there.
668
00:40:52.900 --> 00:40:55.360
And there's some other companies that, uh, are doing stuff.
669
00:40:55.460 --> 00:40:59.960
But there is a lot of, you know, AI is all over our industry. Um,
670
00:40:59.960 --> 00:41:04.440
Bridget hit, I think, um, most of 'em, I would just add a few more things to it.
671
00:41:05.060 --> 00:41:09.120
Um, translation is part of AI, that whole thing.
672
00:41:09.120 --> 00:41:13.640
Automated translation is becoming much more common in the IT space especially.
673
00:41:14.060 --> 00:41:17.760
So we're, we're seeing, uh, a lot more things there. Um,
674
00:41:18.000 --> 00:41:21.640
I think the image and video generation that we just sort of touched on is, is,
675
00:41:21.740 --> 00:41:25.200
is part of that as well. Um, I mentioned using, uh,
676
00:41:25.420 --> 00:41:29.920
AI to sort of summarize like item comments and candidate feedback and
677
00:41:30.260 --> 00:41:32.960
SME feedback. There's some interesting work being done by a couple,
678
00:41:32.970 --> 00:41:37.000
couple of companies in those areas. That's kind of fun. Um, and, uh,
679
00:41:37.000 --> 00:41:40.760
Bridget mentioned the support side and chatbots is another one we're seeing a
680
00:41:40.760 --> 00:41:42.640
lot of, um, that are kind of out there.
681
00:41:42.780 --> 00:41:45.880
So another one that's been around for a long time that I would kind of put in
682
00:41:45.880 --> 00:41:48.160
the AI machine learning space, but it's,
683
00:41:48.160 --> 00:41:52.120
maybe people don't really think of it in that kind of space, but would be the,
684
00:41:52.180 --> 00:41:52.580
um,
685
00:41:52.580 --> 00:41:56.880
the actual test generation where it considers all the constraints of a test and
686
00:41:56.990 --> 00:42:00.320
figures out how to put together a test and, you know, sort of, uh,
687
00:42:00.390 --> 00:42:03.920
that sort of side is just a variation that we've had for a long time. But,
688
00:42:04.260 --> 00:42:07.560
um, so this machine learning, uh, AI stuff
689
00:42:07.580 --> 00:42:11.680
is a big part of it and generative is just sort of this new piece that's,
690
00:42:11.680 --> 00:42:13.720
that's really coming out in a big way.
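To make the test-assembly idea Pat mentions concrete, here is a toy sketch of constraint-driven form assembly: pick items so the form meets per-domain blueprint counts. This is only a greedy illustration under assumed data structures; real assembly engines typically solve an integer program with many more constraints (difficulty targets, enemy items, exposure control).

```python
# Toy blueprint-constrained form assembly (illustrative only).
import random
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    domain: str
    difficulty: float  # e.g. proportion correct from pretesting

def assemble_form(pool, blueprint, seed=0):
    """Pick items so the form meets per-domain counts,
    e.g. blueprint = {"networking": 2, "security": 1}."""
    rng = random.Random(seed)
    form = []
    for domain, count in blueprint.items():
        candidates = [item for item in pool if item.domain == domain]
        if len(candidates) < count:
            raise ValueError(f"pool has too few '{domain}' items")
        form.extend(rng.sample(candidates, count))
    return form

pool = [
    Item("ITM-1", "networking", 0.62),
    Item("ITM-2", "networking", 0.48),
    Item("ITM-3", "security", 0.55),
    Item("ITM-4", "security", 0.71),
    Item("ITM-5", "networking", 0.35),
]
form = assemble_form(pool, {"networking": 2, "security": 1})
print([item.item_id for item in form])  # e.g. ['ITM-2', 'ITM-1', 'ITM-3']
```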
691
00:42:14.970 --> 00:42:18.800
Great. Thanks Pat. I'm gonna shift gears again and, uh,
692
00:42:18.850 --> 00:42:21.120
Sharelle asks a great question and it really,
693
00:42:21.240 --> 00:42:24.840
I think it's a multiple-level question, even though she starts on part of it,
694
00:42:25.660 --> 00:42:29.720
it, she said, when translation costs are high,
695
00:42:30.030 --> 00:42:34.280
what level of risk do we have in putting them into the banks of
696
00:42:34.870 --> 00:42:38.440
translation tools? Is it the same risk as generative AI?
697
00:42:39.060 --> 00:42:42.280
And then I think the second part of that, that question is,
698
00:42:42.280 --> 00:42:44.000
when we talk about costs being high,
699
00:42:44.540 --> 00:42:48.960
are we saving money by using this type of stuff? How does the, where's the,
700
00:42:49.100 --> 00:42:53.320
the pros and the cons come from that? So Liberty, why don't you start off on,
701
00:42:53.700 --> 00:42:55.320
on, on this one.
702
00:42:56.410 --> 00:42:59.870
So, I think the tools are different.
703
00:42:59.890 --> 00:43:04.110
So this isn't a generative AI solution tool. We're talking about, uh, tools, uh,
704
00:43:04.180 --> 00:43:08.390
more of an AI tool, right? A translation tool. Um,
705
00:43:08.690 --> 00:43:10.550
and if you look at how Microsoft does it,
706
00:43:10.550 --> 00:43:15.270
what Microsoft does is we actually put everything into what we call translation
707
00:43:15.270 --> 00:43:19.030
memory. And so as we translate words, we,
708
00:43:19.370 --> 00:43:22.710
we have a memory bank, so we don't have to retranslate that word every time.
709
00:43:22.810 --> 00:43:27.790
And so what ends up happening is we build up this massive, uh, bank of words,
710
00:43:27.990 --> 00:43:30.470
millions and millions of words, um,
711
00:43:30.580 --> 00:43:32.870
so that we only have to translate new words.
712
00:43:32.930 --> 00:43:37.190
So we've been doing this for a very long time using the same set of tools. Uh,
713
00:43:37.250 --> 00:43:41.390
and so every year we're adding, like,
714
00:43:41.770 --> 00:43:44.710
not on the realm of millions of words, uh,
715
00:43:44.930 --> 00:43:48.110
but more like thousands of words, cause we have such a big bank.
716
00:43:48.570 --> 00:43:52.310
But I haven't ever worried about the security of our tools for translation
717
00:43:52.310 --> 00:43:56.430
because it's a completely different process and it's not really
718
00:43:56.540 --> 00:44:00.630
exposed in the same way when you talk about generative AI and the models that
719
00:44:00.630 --> 00:44:04.260
are used to train that. Maybe I should?
720
00:44:05.390 --> 00:44:06.300
Don't freak me out!
721
00:44:09.000 --> 00:44:11.900
No, no, you're right. It's not, it's safe. Um,
722
00:44:11.930 --> 00:44:15.260
they actually have licenses that say stuff like that, that they don't keep it.
723
00:44:15.530 --> 00:44:17.780
It's not a large language model like these other things.
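For readers unfamiliar with the translation-memory idea Liberty describes, here is a toy sketch: cache translations keyed by source segment, and send only unseen segments to the translator. The helper `translate_new_segment` is a hypothetical stand-in for whatever human or machine translation step an organization actually uses; this is not Microsoft's implementation.

```python
# Toy translation memory: only new segments incur translation cost.
def translate_with_memory(segments, memory, translate_new_segment):
    """memory: dict mapping source segment -> stored translation."""
    out = []
    for seg in segments:
        if seg not in memory:                 # miss: translate and remember
            memory[seg] = translate_new_segment(seg)
        out.append(memory[seg])               # hit: reuse the stored text
    return out

memory = {"Select the best answer.": "Wählen Sie die beste Antwort."}
fake_translator = lambda s: f"<translated: {s}>"  # hypothetical stand-in
result = translate_with_memory(
    ["Select the best answer.", "Which port does HTTPS use?"],
    memory,
    fake_translator,
)
# Only the second segment is newly translated; the first hits the memory bank.
```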
724
00:44:19.040 --> 00:44:21.590
Great. Uh, Marc, you wanted to chime in?
725
00:44:22.300 --> 00:44:26.150
Just, um, on the legal front, you know, with, uh, translation,
726
00:44:26.350 --> 00:44:30.310
I think you still have to be concerned with bias, right? Um,
727
00:44:31.140 --> 00:44:36.120
and you know, there may be, depending on the nature of the test,
728
00:44:37.100 --> 00:44:41.320
um, a concern, particularly if you're operating in the EU,
729
00:44:41.860 --> 00:44:45.840
you know, you'd have to disclose that you used, I think, uh,
730
00:44:45.840 --> 00:44:50.760
generative AI to translate, or any kind of AI, any automated process for
731
00:44:50.870 --> 00:44:54.760
translation. And if decisions are being made on that, you know,
732
00:44:54.760 --> 00:44:58.200
if there's any kind of concern about bias in, um,
733
00:44:58.340 --> 00:45:03.120
the translation process, so that for the intended audience, you know,
734
00:45:03.250 --> 00:45:06.280
um, really, you know,
735
00:45:06.280 --> 00:45:10.600
there's some kind of quality problem, frankly, with the translation, um,
736
00:45:10.670 --> 00:45:14.880
such that the people who need it are, you know, not well served;
737
00:45:15.060 --> 00:45:19.200
in other words, there would be construct-irrelevant factors,
738
00:45:19.780 --> 00:45:24.400
you know, contributing to a test score. Um, and I,
739
00:45:24.440 --> 00:45:28.080
I think we would all agree that's a bad thing. So I do think, you know,
740
00:45:28.100 --> 00:45:32.240
you still have to be concerned about quality control when you're using,
741
00:45:32.540 --> 00:45:32.880
you know,
742
00:45:32.880 --> 00:45:37.600
from a legal standpoint when you're using an AI tool for translation purposes.
743
00:45:38.460 --> 00:45:41.970
Great. Thanks Marc. You know what, I'm gonna start, um, we're cutting-
744
00:45:42.180 --> 00:45:45.730
we've got about 14 minutes left. I'm gonna, we have another poll question.
745
00:45:45.830 --> 00:45:48.570
I'm gonna let it run in the background while we keep jumping to the other
746
00:45:48.850 --> 00:45:51.890
questions, cause it's an interesting one, about legal uncertainty.
747
00:45:51.910 --> 00:45:53.010
So I'm gonna launch that one.
748
00:45:53.210 --> 00:45:57.570
Everyone should see it, jump in. And I'm gonna switch it to another question and,
749
00:45:57.910 --> 00:46:01.410
and hopefully we can start, um, shortening our responses just a little bit.
750
00:46:01.430 --> 00:46:02.770
But Jim, Jim is asking,
751
00:46:03.030 --> 00:46:07.250
how are you handling references when creating items if an AI engine is creating
752
00:46:07.310 --> 00:46:11.090
the items? It's a great question and
753
00:46:12.930 --> 00:46:16.520
I don't know if we have a good answer. What do you think, Pat?
754
00:46:18.090 --> 00:46:22.800
Um, bottom line, I, I mentioned this earlier, the references are not reliable.
755
00:46:23.220 --> 00:46:25.280
Um, you, uh, can't count on 'em.
756
00:46:25.540 --> 00:46:27.960
If you have a closed system where you know where they come from,
757
00:46:27.980 --> 00:46:31.120
you can at least know it was restricted. So that's good. Um,
758
00:46:31.120 --> 00:46:35.360
otherwise you pretty much have no idea. Um, there are, you know,
759
00:46:35.840 --> 00:46:38.680
examples out there where people are, you know,
760
00:46:38.950 --> 00:46:42.840
copyrighted content has been proven to be just basically copied and come back,
761
00:46:42.950 --> 00:46:46.480
like especially in code where you ask it to generate code and it's given back
762
00:46:46.490 --> 00:46:50.200
exact examples of real licensed code that's out there. Um,
763
00:46:50.500 --> 00:46:53.360
you have to be super careful. Um, and I,
764
00:46:53.440 --> 00:46:56.200
I think it's impossible actually to be careful enough, quite honestly.
765
00:46:56.660 --> 00:46:58.440
So you don't know where the content came from,
766
00:46:58.580 --> 00:47:00.320
you don't know if you have any rights to use it.
767
00:47:03.600 --> 00:47:06.630
Great. Well I'm gonna share the results of the poll. Hopefully,
768
00:47:06.650 --> 00:47:07.790
can you all see the results?
769
00:47:09.080 --> 00:47:09.890
Yes.
770
00:47:09.890 --> 00:47:12.140
Look at that. We're doing a good job of, um,
771
00:47:12.140 --> 00:47:17.100
people are just still waiting to see, which is good.
772
00:47:17.400 --> 00:47:21.180
So let me just see if there's another key question worth jumping into cause we're
773
00:47:21.180 --> 00:47:24.940
running short on time. Um, there's a question,
774
00:47:25.050 --> 00:47:29.500
this is more for you, Marc: what percentage of credentialing organizations,
775
00:47:29.580 --> 00:47:31.360
or actually maybe you, Bridget, uh,
776
00:47:31.360 --> 00:47:35.520
what percentage of credentialing organizations are currently copy-
777
00:47:35.670 --> 00:47:40.640
copyrighting their items on their exams? How does this relate to copyright,
778
00:47:40.790 --> 00:47:45.760
excluding statements of fact, since some argue that an examination is
779
00:47:45.760 --> 00:47:46.593
fact-based?
780
00:47:48.500 --> 00:47:51.980
I wish I knew the answer to that question, so I'm not gonna pretend I do. Uh,
781
00:47:52.020 --> 00:47:55.900
I can tell you the copyright office is entirely backed up since COVID.
782
00:47:56.440 --> 00:47:57.050
So, uh,
783
00:47:57.050 --> 00:48:00.780
even if any- somebody is copyrighting, you are waiting in a queue that's longer
784
00:48:01.330 --> 00:48:03.340
than, uh, it should be. And actually,
785
00:48:03.440 --> 00:48:06.900
I'm gonna ask a question to Marc when we pop over to you, Marc. What I,
786
00:48:06.980 --> 00:48:11.100
I am curious what that does for an organization who has fully, you know,
787
00:48:11.100 --> 00:48:12.460
submitted the copyright information and,
788
00:48:12.460 --> 00:48:15.100
and they're just waiting in queue cause it is so long. Um,
789
00:48:15.380 --> 00:48:18.820
I do think Marc is probably more prepared to talk about the implications of
790
00:48:18.820 --> 00:48:21.820
the copyright; he's already touched on that a little bit. So, um,
791
00:48:21.890 --> 00:48:23.100
what I will say though is I,
792
00:48:23.380 --> 00:48:28.380
I have seen copyright activity go down for organizations, uh,
793
00:48:28.530 --> 00:48:31.220
from, sadly I've been in the industry for over 20 years.
794
00:48:31.520 --> 00:48:35.820
It is actually declining. Um, and I think the rationale has been,
795
00:48:36.400 --> 00:48:40.300
uh, when you end up finding yourself in a legal, uh, situation,
796
00:48:40.840 --> 00:48:44.060
the copyright wasn't protecting you in the manner you thought it was.
797
00:48:44.640 --> 00:48:47.380
And I think, Marc, you talked a little bit about that, right?
798
00:48:47.750 --> 00:48:52.180
About what's actually protected. So I have seen organizations back away, um,
799
00:48:52.180 --> 00:48:54.820
but I, I can't tell you the percentage, honestly. Marc,
800
00:48:54.820 --> 00:48:55.940
if you can help me out on the rest.
801
00:48:58.160 --> 00:49:02.970
Sure. Um, I, and of course I have no way of knowing what the percentage is. Um,
802
00:49:03.590 --> 00:49:07.370
and you know, when, when the pandemic hit, you know,
803
00:49:07.470 --> 00:49:09.450
and they shut down the, uh,
804
00:49:09.450 --> 00:49:13.250
in-person examinations at the copyright office and they actually didn't have a
805
00:49:13.570 --> 00:49:18.530
solution for well over a year, um, to do remote examinations of, uh,
806
00:49:18.590 --> 00:49:22.850
secure test registration applications. Um, I mean,
807
00:49:23.610 --> 00:49:24.770
I, I did,
808
00:49:25.010 --> 00:49:29.170
I published an article on LinkedIn about the value of the Defend Trade Secrets
809
00:49:29.170 --> 00:49:31.850
Act and I mentioned it earlier today. And, um,
810
00:49:32.490 --> 00:49:36.930
I hope if you have a secure high stakes test, if you, you know,
811
00:49:36.930 --> 00:49:38.850
take anything away from what I said today,
812
00:49:39.140 --> 00:49:43.890
write down Defend Trade Secrets Act - ask lawyer, um,
813
00:49:44.040 --> 00:49:46.570
because honestly, you know,
814
00:49:46.670 --> 00:49:49.210
you have the same kind of remedies available to you.
815
00:49:49.280 --> 00:49:51.810
It's a federal trade secrets law,
816
00:49:52.030 --> 00:49:55.370
and you have the same kind of remedies available to you, um,
817
00:49:55.560 --> 00:50:00.010
that are otherwise available only with a registered copyright.
818
00:50:00.590 --> 00:50:05.330
Um, you just have to demonstrate that you use commercially reasonable methods,
819
00:50:05.830 --> 00:50:09.130
um, to protect the confidentiality of your content.
820
00:50:09.630 --> 00:50:12.530
And if your content is, you know, uh,
821
00:50:12.640 --> 00:50:17.170
sort of stripped of its value by losing, uh, its confidentiality,
822
00:50:17.630 --> 00:50:20.970
which is kind of the nature of a secure test, then you,
823
00:50:20.970 --> 00:50:24.010
you probably qualify under the Defend Trade Secrets Act.
824
00:50:24.010 --> 00:50:28.080
And there have been various courts that have held that, with
825
00:50:28.080 --> 00:50:29.320
very minor exceptions,
826
00:50:30.060 --> 00:50:32.720
secure test content is a trade secret.
827
00:50:32.860 --> 00:50:37.360
So I wanna encourage you to really consider your ability to harness that law.
828
00:50:37.580 --> 00:50:40.640
But on the copyright thing, I mean, you know, yes,
829
00:50:40.770 --> 00:50:44.640
facts can't be copyright protected, but hopefully if your question is a,
830
00:50:44.780 --> 00:50:47.720
if your test is a question of facts, um,
831
00:50:47.720 --> 00:50:52.320
there's probably other problems you have besides copyright law, right? So, um,
832
00:50:52.840 --> 00:50:57.200
I would just suggest to you that even simple items, you know,
833
00:50:57.200 --> 00:50:59.360
with a three-line, um, you know,
834
00:50:59.590 --> 00:51:02.960
stem and then three distractors and one correct answer, I mean,
835
00:51:02.960 --> 00:51:06.920
unless we're talking about math, that's a different story. Um, if we're,
836
00:51:06.920 --> 00:51:10.760
but we're talking about some like higher level thinking or, uh,
837
00:51:10.790 --> 00:51:12.080
substantive knowledge,
838
00:51:12.660 --> 00:51:16.840
you should be able to get copyright protection for your items. Um,
839
00:51:17.060 --> 00:51:22.040
and I have many clients who have copyright registrations and there are too
840
00:51:22.280 --> 00:51:26.520
many lawsuits to count as far as testing organizations that have used the
841
00:51:26.520 --> 00:51:29.800
copyright law to, uh, go after infringers,
842
00:51:29.840 --> 00:51:33.680
particularly when there's like a commercial element to, um,
843
00:51:33.870 --> 00:51:36.000
what the infringers are doing.
844
00:51:37.150 --> 00:51:41.940
Great. Thanks, Marc. Well, next, next question I wanna jump on- it's,
845
00:51:41.940 --> 00:51:43.740
it's got the highest rating right now. It says,
846
00:51:44.590 --> 00:51:49.270
we've heard a lot of fear and stuff that comes out when new technology rolls
847
00:51:49.270 --> 00:51:51.030
out. And I like this question. It says,
848
00:51:51.030 --> 00:51:55.950
what excites the panel the most about the possibilities AI brings to
849
00:51:55.950 --> 00:51:59.310
the industry? And, you know, Liberty, I'd like you to start it off. Like, what,
850
00:51:59.340 --> 00:52:02.310
what, what are you pumped about? Like, what's gonna get you going about this?
851
00:52:03.980 --> 00:52:06.320
It goes back to what I said earlier.
852
00:52:06.760 --> 00:52:10.920
I think we're gonna have to completely reimagine the way we assess people.
853
00:52:11.380 --> 00:52:14.800
And that's kind of been my jam for the last decade,
854
00:52:15.140 --> 00:52:20.040
is trying to force us to think differently in this space. And so I,
855
00:52:20.240 --> 00:52:23.120
I think because we have to recognize people are going,
856
00:52:24.400 --> 00:52:28.660
people don't see using Chat GPT as cheating for the most part.
857
00:52:28.770 --> 00:52:32.540
Like if you look at any, like, if you look at students that are using it,
858
00:52:32.540 --> 00:52:33.620
like some of 'em do,
859
00:52:33.640 --> 00:52:36.340
but for the most part it's just another tool like a calculator.
860
00:52:36.880 --> 00:52:41.700
So if we recognize that people don't really, okay, so don't get me wrong,
861
00:52:41.700 --> 00:52:45.620
original thought is important. I,
862
00:52:45.880 --> 00:52:47.500
so I'm not saying that it's not.
863
00:52:47.520 --> 00:52:50.900
But we have to recognize that people will use Chat GPT,
864
00:52:51.600 --> 00:52:55.620
and so how do we have to reimagine the way we think about the questions we ask,
865
00:52:55.620 --> 00:53:00.380
the way we ask the questions? And I, I think it maybe brings us, it,
866
00:53:00.380 --> 00:53:03.980
it requires us to assess skills at a different level.
867
00:53:04.370 --> 00:53:07.820
It's no longer just like your knowledge, which was where we tend to,
868
00:53:07.820 --> 00:53:11.700
and maybe even some level of problem solving, but it's a, it's a higher level.
869
00:53:12.000 --> 00:53:15.260
If they're going to have a tool that can do the simple stuff, and that's Chat GPT,
870
00:53:15.340 --> 00:53:18.540
then we have to test them at the level that, like,
871
00:53:18.730 --> 00:53:22.340
they have to think about the world differently to be able to engage with Chat GPT
872
00:53:22.380 --> 00:53:26.280
effectively. And so we have to assess the ability to do that.
873
00:53:26.280 --> 00:53:29.160
And that's a whole different thing. It's a whole new ballgame, right?
874
00:53:29.160 --> 00:53:30.480
That's what excites me the most.
875
00:53:31.270 --> 00:53:32.720
Wonderful. What about you, Pat?
876
00:53:34.810 --> 00:53:38.110
Um, you know, I get asked in the company a lot why, um,
877
00:53:38.260 --> 00:53:42.670
I've made such a big commitment to the generative AI in our product lines and
878
00:53:42.670 --> 00:53:45.710
our plans when I've got such doubts about it. And I,
879
00:53:45.830 --> 00:53:49.750
I think it's because of two things. First of all, it is a super cool technology.
880
00:53:49.870 --> 00:53:51.990
I mean, if you got any sort of geekiness in you at all,
881
00:53:52.170 --> 00:53:56.030
you've just gotta appreciate it, right? Um, and that, that's, that's part of it.
882
00:53:56.030 --> 00:54:00.110
But the other part is I have a belief that a lot of these things are going to be
883
00:54:00.230 --> 00:54:01.550
straightened out over the next couple years.
884
00:54:01.550 --> 00:54:05.830
And it's important for us to be there so that as people go through this process
885
00:54:05.930 --> 00:54:10.510
and already, um, it's, it's, you know, we have all the pieces in place,
886
00:54:10.510 --> 00:54:13.910
we've matured along with the technology and it's just moving. And it,
887
00:54:13.990 --> 00:54:17.470
I think this is a pivot moment for our industry. It is, uh,
888
00:54:17.540 --> 00:54:21.110
something that is changing everything and it's going to make a positive
889
00:54:21.110 --> 00:54:22.310
difference in the long run.
890
00:54:22.580 --> 00:54:25.310
It's going to be a little bit painful in the short run. Um,
891
00:54:25.450 --> 00:54:30.070
and you've just gotta be cautious. But, but you should be watching it. I mean,
892
00:54:30.070 --> 00:54:31.430
everybody should be watching it.
893
00:54:31.880 --> 00:54:33.670
Great. Thanks Pat. Bridget?
894
00:54:34.640 --> 00:54:38.490
Well, I have no geekiness in me and I, I still think it's cool. Um,
895
00:54:38.870 --> 00:54:42.050
I'm gonna use an analogy, um, like riding a bike.
896
00:54:42.110 --> 00:54:44.090
So you learn to ride a bike and you have training wheels.
897
00:54:44.690 --> 00:54:47.530
I think it's important right now that people keep the training wheels on,
898
00:54:47.840 --> 00:54:50.570
because I think if you are starting out and you take 'em off,
899
00:54:51.730 --> 00:54:53.550
you better have a lot of band-aids. Um,
900
00:54:53.650 --> 00:54:56.470
but think about when you first actually had your first ride and your training,
901
00:54:56.870 --> 00:55:00.190
training wheels were off, and you're like, best thing ever. Right?
902
00:55:00.550 --> 00:55:02.950
I actually think organizations are gonna get to that point, right?
903
00:55:02.980 --> 00:55:06.790
It's going to be the best thing ever provided you put the right things in place
904
00:55:06.890 --> 00:55:10.110
and you didn't just run right off a cliff. So I love change.
905
00:55:10.260 --> 00:55:11.093
It's gonna be cool!
906
00:55:12.020 --> 00:55:15.410
Right, Marc? I mean, you gotta have an opinion on this. I,
907
00:55:15.550 --> 00:55:19.810
Uh, yeah. I, I do. I have too many opinions, but, um,
908
00:55:20.530 --> 00:55:25.130
I would say there's a lot of promise, and I really think,
909
00:55:25.130 --> 00:55:29.770
particularly in the world of assessment, um, it's about
910
00:55:29.770 --> 00:55:31.530
it's more about learning, I think.
911
00:55:31.730 --> 00:55:36.050
I think there's incredible promise in using generative AI,
912
00:55:36.710 --> 00:55:40.570
um, to actually help individuals learn. I know, you know,
913
00:55:40.810 --> 00:55:43.730
a lot of what we do is we figure out whether people have learned and whether
914
00:55:43.730 --> 00:55:47.130
they've learned the things we need them to learn to meet our standards.
915
00:55:47.510 --> 00:55:51.330
That's assessment, right? But there's so much more, I think,
916
00:55:51.330 --> 00:55:55.650
opportunity and capability for generative AI to, you know,
917
00:55:55.950 --> 00:55:59.130
not just, okay, so here's the area where you need work.
918
00:55:59.270 --> 00:56:03.890
So now, like, what are the things that an individual candidate,
919
00:56:04.370 --> 00:56:07.010
examinee, student needs to work on?
920
00:56:07.470 --> 00:56:10.410
And having the ability- now we're, we're basically,
921
00:56:10.500 --> 00:56:15.250
we're making like private tutors available to the world for next to nothing.
922
00:56:15.990 --> 00:56:20.210
That's amazing. Like, to me, there's an opportunity to really,
923
00:56:20.910 --> 00:56:24.360
um, be more inclusive and equitable in like,
924
00:56:24.360 --> 00:56:29.000
everything when it comes to any area of knowledge that we assess. You know,
925
00:56:29.000 --> 00:56:32.480
we want folks to learn that knowledge. And I,
926
00:56:32.600 --> 00:56:36.360
I just think that there's incredible opportunities with, uh,
927
00:56:36.420 --> 00:56:38.560
any kind of a chatbot type, uh,
928
00:56:38.920 --> 00:56:43.320
platform used properly in conjunction with assessments
929
00:56:43.780 --> 00:56:46.160
to actually ensure, um,
930
00:56:46.350 --> 00:56:51.200
that people are learning the material we want and need them to learn and that
931
00:56:51.200 --> 00:56:56.160
they want and need to learn to move ahead in their education and
932
00:56:56.160 --> 00:56:58.520
in their careers and professions. Um,
933
00:56:58.540 --> 00:57:03.440
so I just think there's tremendous opportunity in a space where like
934
00:57:03.600 --> 00:57:08.040
a lot of the high stakes stuff doesn't matter quite so much. Um,
935
00:57:08.260 --> 00:57:11.440
so that's where I think the greatest opportunity lies.
936
00:57:12.430 --> 00:57:15.100
Great. Thank you. We're pretty much outta time.
937
00:57:15.110 --> 00:57:17.060
There was just one last closing question.
938
00:57:17.080 --> 00:57:19.340
If I could get like a 30 sec- second opinion.
939
00:57:19.800 --> 00:57:24.300
Have your opinions changed about this, from when you first started hearing about it to
940
00:57:24.300 --> 00:57:29.220
today, as it's shifted over 2023 and you've learned more? Just a 30-second answer.
941
00:57:29.240 --> 00:57:30.500
Why don't we start with you, Bridget?
942
00:57:31.280 --> 00:57:31.710
Yeah.
943
00:57:31.710 --> 00:57:35.810
It has 100% changed and I think it will continue to evolve until more structure
944
00:57:35.810 --> 00:57:39.490
gets around it, but it's actually, it's increasing for the better, right?
945
00:57:39.510 --> 00:57:42.890
As far as my opinion. I, I'm getting more excited about it, the more I learn.
946
00:57:44.450 --> 00:57:45.283
Excellent. Pat?
947
00:57:46.060 --> 00:57:49.090
Absolutely. It's changed. Um, cheating being a big threat,
948
00:57:49.190 --> 00:57:52.690
did not consider that initially. Um, prompt engineering,
949
00:57:52.720 --> 00:57:55.330
sort of this next generation skill, um,
950
00:57:55.470 --> 00:57:59.650
that's sort of, like, something I've been learning about. And just the idea of,
951
00:57:59.650 --> 00:58:02.970
uh, using generative AI in ways that have nothing to do with generating items.
952
00:58:03.710 --> 00:58:04.810
Um, it's been kind of cool.
953
00:58:05.150 --> 00:58:06.770
Thanks, Pat. What about you, Liberty?
954
00:58:08.540 --> 00:58:10.660
I think I'm more excited about it,
955
00:58:10.760 --> 00:58:13.820
but the prompt engineering thing is really fascinating to me. It's,
956
00:58:14.120 --> 00:58:17.180
that's the art of this and that's what's really cool.
957
00:58:17.180 --> 00:58:19.540
And I think that's really the future of assessment.
958
00:58:20.410 --> 00:58:23.140
Yeah. I mean, especially since you've been working on this stuff for 10 years.
959
00:58:23.340 --> 00:58:26.180
I mean, come on. How about you, Marc?
960
00:58:27.670 --> 00:58:31.050
Oh, definitely. Um, because I see, you know,
961
00:58:31.110 --> 00:58:36.090
the technology evolving so quickly, um, and, you know, the,
962
00:58:36.270 --> 00:58:40.770
the companies and people in this space are recognizing the need, um, for,
963
00:58:41.070 --> 00:58:41.290
uh,
964
00:58:41.290 --> 00:58:46.130
certainty around IP issues and for data security and
965
00:58:46.130 --> 00:58:50.370
confidentiality and doing things very quickly to address those concerns.
966
00:58:51.110 --> 00:58:53.330
Um, and you know, so I,
967
00:58:53.430 --> 00:58:57.810
I'm encouraged and I do think that the technology is gonna be in a much better
968
00:58:57.810 --> 00:58:59.170
place even a year from now.
969
00:58:59.590 --> 00:59:02.810
And some of the concerns that I've mentioned may even be addressed with
970
00:59:03.040 --> 00:59:07.890
potential new laws. So I think there's a lot of reason for hope and, um,
971
00:59:07.910 --> 00:59:10.530
you know, I'm eager to see how things are going to unfold.
972
00:59:11.730 --> 00:59:15.250
Excellent. Well, thank you four, for joining and,
973
00:59:15.310 --> 00:59:19.370
and sharing your insights. For you out there, we are gonna be,
974
00:59:19.370 --> 00:59:21.330
there's gonna be a series of things throughout the, uh,
975
00:59:21.360 --> 00:59:23.130
year where we're gonna talk about this stuff more-
976
00:59:23.150 --> 00:59:25.530
get back together in conferences, more webinars,
977
00:59:25.530 --> 00:59:27.170
we'll see what we can put out there.
978
00:59:27.170 --> 00:59:30.370
But thanks for joining and we'll get the stuff out to everyone post-webinar.
979
00:59:30.520 --> 00:59:32.210
Take the survey, have a great day!
980
00:59:32.590 --> 00:59:34.130
Thanks everybody! Thanks Brodie!