Human and Artificial Cognition: What Does It Mean to Learning Designers and Faculty?

>>So, it's my privilege to announce the keynote speaker, Dr. George Siemens. George is the founding president of the Society for Learning Analytics Research. He's advised government agencies in Australia, the European Union, Canada, and the United States, as well as numerous international universities, on digital learning and on using learning analytics to assess and evaluate productivity gains in the education sector and improve learner results. In 2008 he pioneered — I'm just going to say MOOCs, because I don't have to spell it out for you folks — and I'm sure many of you are well aware of George's early work in that space. He researches technology, networks, analytics, and openness in education. Dr. Siemens is Professor and Executive Director of the Learning Innovation and Networked Knowledge Research Lab at the University of Texas at Arlington, nearby. He co-leads the development of the Centre for Change and Complexity in Learning at the University of South Australia. He has delivered keynote addresses in more than 35 countries on the influence of technology and media on education, organizations, and society. His work has been profiled in provincial, national, and international newspapers, radio, and television. He has served as PI or co-PI on grants funded by the NSF, SSHRC, Intel, Boeing, the Bill and Melinda Gates Foundation, and the Soros Foundation. He's received numerous awards, including honorary doctorates from Universidad de San Martín de Porres and the University of the Fraser Valley for his pioneering work in learning, technology, and networks. He holds an honorary professorship with the University of Edinburgh. So, please join me in welcoming Dr. George Siemens. (Applause).

>>Well, good morning. Thanks for the opportunity to spend part of your day talking about some areas of interest that are influencing a lot of my view on what we need to do with teaching and learning, and in society in general. I'll give you a little background on my motivation in this space. I was at Red River College in the late 90s — that's in Winnipeg, Manitoba; just go straight north for about 1,500 miles and you're there. We were the first college in the country to go with an exclusive laptop program, meaning all of the students in our program received a laptop, and they started working with technology in ways that we, as faculty members or learning designers, had not anticipated. At that point it was a five-megabit pipe into the campus, which covered the entire campus — so if you got there early and started downloading movies, basically anyone who showed up after you would have slow Internet, or no Internet in some cases. But what was unique, aside from the bandwidth issues, is that what happened in the classroom remains an ongoing challenge for me to understand with technology. The front half of the classroom — namely faculty — would take existing learning material, in this case overhead slides. We literally had a typing pool: we would give them our slides and they would magically transform them into a thing called a .ppt file, and then we could go back into the classroom and teach with a projector rather than the traditional overhead slides. What happened at the other end of the room changed everything. It changed how students got hold of information. It changed what they did in the classroom while I was lecturing. It changed the way they related to one another. And that's been a defining issue for me: to understand why technology has such dramatically uneven impact on practices when different parties, in different roles, have access to the same tool.
The reason I start with that as a demonstration: today I want to look specifically at the ways in which artificial cognition impacts that process. Now, you've probably heard some variation of: the best time to have started thinking about online learning was about 20 years ago, and the second best time is right now. Deb and I were just chatting about this before — I'm often surprised when I meet with universities that still are not focused on digital learning, that still don't have a digital learning strategy, or that feel that by having Canvas as an LMS they're now in the digital learning game. I find this mindset interesting, because — and I know all of you are well aware of this — there is a range of quality dynamics, a range of outstanding questions you have to address as leaders in an organization, that go well beyond having a technology infrastructure. So my argument today is, largely: there will be parts of the talk where you'll be thinking, wow, this means absolutely nothing to me, and there will be other sections where I'm arguing that the time to start thinking about how you're going to implement artificial cognitive agents in your teaching and learning practices is today. It's a challenge to begin preparing for, given the pace of change of some of these technologies being developed. I won't try to scare you about how these systems are developing — I think you probably get that from your daily media. But I do want to take you on a little walk through the evolution of technology from my perspective as an instructor and faculty member. Then, in the second part of the talk, I want to start articulating what happens at the intersection of human and artificial cognition.
I try to stay away from the term artificial intelligence, because it's a big, ugly, in some ways useless word — it can mean so many different things to people. From the perspective of learning designers and faculty, I try to reduce it down to: which cognitive tasks are being done, or should be done, by an artificial cognitive agent, and which are or should be done by a human agent? That's the perspective I'll bring to that articulation. Along the way I'll talk a little about the biases and ethics we face. I'll look at the ways AI systems are already exceeding human cognition in domains you may or may not be aware of. And then I'll spend time on the design and particularly the implementation implications from a teaching perspective.
But an underpinning strand should be: what does this mean for learning designers and faculty, in a system designed for society? What does the evolution of more assistive cognitive technologies mean for how we design, how we measure and assess quality, and how we develop our students?

So, my stages of technology look a bit like this, and I'll go through them fairly quickly: from the pre-web era through to AI-influenced learning, which is roughly where we are. Some of you have walked this journey for 20-plus years; some of you may have stepped in somewhere in the middle. Regardless, it helps to have a little background on how we got here. Pre-web, this was basically some equivalent of computer-based teaching and training. You would often have a CD-ROM, or you might have a laserdisc used for training, and you would give students an opportunity to sit in front of a computer and go through a series of learning activities that weren't connected to the web. It was not a social experience; by and large it was individual learning. Then in the late 90s and early 2000s we had the shift to the web. The web was significant on a number of fronts. It produced what I described earlier as that uneven impact: it generated an outcome for students that was different from the practices happening at the faculty end. At this point we were looking at early-stage LMSes. Many of you will recall, if you were around then, having to learn HTML — not that you wanted to, but if you didn't, you would get weird formatting in blog or forum posts, because everything was done in HTML. The main tools at that point would have been WebCT, Blackboard, and Moodle. (Today I think Canvas has the market-share lead, but those were the tools then; Blackboard later bought Moodlerooms and a couple of other toolsets.) This era was about curriculum management. It was still a point where we did what we used to do, just with a new technology. There weren't any dramatically new practices; it was basically a transition. We didn't transform the online experience — we just transferred the classroom environment. Faculty had the ability to provide readings; some of you might even recall Elluminate, an early voice-over-IP platform where you could share slides. Those were the tools faculty gravitated toward, because they preserved the familiar dynamics of a classroom. Then, roughly in the 2000s, the wheels fell off this relationship and we started seeing social media. Some of you might recall when we used to call it Web 2.0. But it was in many ways just an extension of what Tim Berners-Lee initially wanted: a way for people on the web to be able to read and write. It wasn't a consumption platform; it was an engagement platform.
And in a number of ways this was the golden age of the web for learning and knowledge development. We've since found out that giving everybody a voice is not necessarily a good thing, and it produces some interesting impacts culturally and politically. But at this point the social dynamics were at least fully brought into learning, and suddenly you had control. You could reach out to a faculty member in the Philippines; you could reach out to a faculty member in New York. You could engage and create communities of individuals within your particular knowledge domain, and if you were engaged, you created these very rich networked communities. We had mediating technologies that were a little more immersive, such as Second Life, but by and large it was blogs, wikis, social bookmarking, and so on. Then, in the early stages of YouTube, we became able to take videos and put them online reasonably easily. I remember spending hundreds of hours in a TV studio — we had received a grant at Red River to do some online development work — and the cost of doing video was a five-person staff, including the editing and the recording. We tried streaming online, and we had to run the class in the evening — remember our five-megabit bandwidth — because there were fewer students on the network then. (indiscernible). Very cost-intensive to do. But all of a sudden this neat thing called YouTube showed up, and it completely changed the cost of, and our ability for, sharing our ideas. At that point we started to see the growing influence of digital learning, popularized through MOOC platforms like (indiscernible). These were video-based. This was an early stage of platforms coming together: we still had some of the social dimensions of Web 2.0, and things became much more media-rich and much more engaging. We also had a brief game-based cycle, which has always been a strand but hasn't fully blossomed, and it may come down to cost. Let's face it: if you're used to playing Call of Duty at home, and you show up on Monday and they say, play this game — we didn't have a million-dollar budget; we paid a student over her summer internship. "It's a great game." You can understand why students might not be horribly engaged in that type of learning experience. There's a lot of potential opportunity there, but it's a strand of influence rather than a prominent thrust in most areas. Now we're seeing more VR and AR dynamics, with Oculus and others becoming very affordable, in high-stakes domains such as medicine and others. All right — this is where we all are. I assume some of you have a background in this; none of this is new.
Now we're getting to the AI-influenced stage, and this is a stage where we're dealing with an awful lot of hype. A lot of it simply isn't true. Some of you may have seen the recent piece — I think it was The Economist, or one of those outlets — on students in China. They would put on a little headband, and based on EEG analysis, if the student was paying attention it would glow a certain color; if they were distracted, it would glow a different color. And there was a way to automatically email or text the parents that their student wasn't paying attention during that cycle. Of course this was cast as AI. It was not. It was a very rudimentary two-sensor device placed on a student's head. There is no sense in which this is scientifically useful, and it wasn't AI.
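To make "rudimentary" concrete, here is a hypothetical sketch of what such a two-sensor headband could be doing under the hood: a fixed band-power ratio and a hard-coded threshold, with nothing learned anywhere. The sampling rate, the theta/beta "attention" proxy, and the cutoff are all my own illustrative assumptions, not details of the actual device.

```python
# Hypothetical sketch: a two-sensor "attention" light with no machine
# learning anywhere. The theta/beta power ratio is a commonly cited (and
# contested) attention proxy; the 2.0 threshold is invented.
import numpy as np

FS = 256  # assumed sampling rate in Hz

def band_power(signal, fs, low, high):
    """Mean spectral power of `signal` between `low` and `high` Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= low) & (freqs < high)
    return power[mask].mean()

def headband_color(eeg_window):
    """Map one window of raw EEG samples to a light color."""
    theta = band_power(eeg_window, FS, 4, 8)    # 4-8 Hz: mind-wandering proxy
    beta = band_power(eeg_window, FS, 13, 30)   # 13-30 Hz: focus proxy
    return "focused" if theta / beta < 2.0 else "distracted"

# One second of fake EEG dominated by 20 Hz activity, i.e. "focused".
t = np.arange(FS) / FS
window = np.sin(2 * np.pi * 20 * t) + 0.2 * np.random.randn(FS)
print(headband_color(window))
```

A threshold on two sensors is signal processing, not artificial cognition — which is the point.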
But basically, in technology right now, if you don't know what something is, and it sounds scary or freaky or weird or creepy, it's called AI. That's currently the general metric. That's where we are now. There is some value in the AI conversation in learning, but I don't particularly like the direction of the focus, and you'll see that for the rest of the talk I'm going to try to shift it away from "AI" and toward what, specifically, an artificial cognitive agent will do. We've heard that some X amount of jobs — 30, 40, 50 percent — will be automated. More likely, we'll see percentages of jobs automated rather than full job portfolios. What I mean is that there will be parts of work that are automated: you may lose 20% of your existing work to some level of automation, but not the entire position itself. Before I get there, I want to make two quick points. One is that there is a range of related technologies developing on the periphery that will likely have some impact over time and remain relevant — tools like these. You've also heard that (indiscernible) is starting to leave at least the LMS phase. There's a huge conversation around learner profiles — I'll say learning profiles are the single biggest unsolved issue in digital learning right now. Because we're still treating our students like entities we don't really care much about: we don't worry about what you know; we worry about what I want to teach. Our whole curricular model is structured around what do I want to teach.
Then there is a range of toolsets that target specific learning needs and increase personalized, adaptive learning. We've seen related technologies, and some systemic attempts to address this as well. We've seen universities take focused initiatives on innovation — (indiscernible) being some of the more prominent examples — and there are smaller departments and smaller initiatives, often at the community or state college level, plus the growth of networked approaches that try to generate some leverage. In the early stages of this, it will become a more pronounced shift toward greater networked utilization of resources as we try to deal with diminishing budgets and a changing landscape. Simultaneously, there's been an enormous explosion of education startups, to the point where something in the range of 3,300 organizations are listed in the Crunchbase data set of educational startups. For lack of a better word — how many of you have heard of the Frankenfish? Basically, when the pond dries up, it will crawl to the next pond, but it's technically a fish. That's the startup ecosystem in some regards: it morphs to the circumstances and to the changes that are occurring.

At the same time, we've got an explosion of research interest driven by the datafication of higher education. We're collecting data around mobile use, around engagement in MOOCs, around the ways students use university resources, and so on. This treasure trove of data looks a bit like this: we're getting information about our students from their use of student information systems through to their use of any of the LMS platforms. And we're adding — to a greater degree, and to the consternation of privacy advocates — instruments that help us better understand the psychological profile and psychological attributes of our student population. So, more and more, if you're breathing and touching technology, you're being monitored at some level. You're generating data trails that someone, at some point, will try to ask questions of. Now, that's just a byproduct of the digital environment. That's how it was created — it doesn't mean it's the only way, and it's certainly not inevitable — but it's what we have right now. When that comes into the university environment — my early interest in this was when a group of colleagues and I started the learning analytics conference, and then the Society for Learning Analytics Research, because we were trying to understand what it means to be involved in research activity when you're dealing with students, with learners specifically. Our focus split into two dynamics. We weren't interested in academic analytics — we didn't want to optimize classroom use or reduce energy consumption at a university. We were very much interested in the relationship between the student, the faculty, and the institution. Within that triad, it includes details like your curriculum, your content, the learning experience, the learning design process, and quality issues as well. So that was our interest within learning analytics: look, we're collecting a ridiculous amount of data as a byproduct. Just looking at my own career: I wrote a paper on connectivism, my own theory, where I said learning is fundamentally a networked process. In 2008, Stephen Downes and I ran an open online course at the University of Manitoba, and the focus was to explain to people what it means to learn in a network. Put another way: connectivism generated MOOCs, and MOOCs created learning analytics. So, without going all biblical on you: learning is the foundation, and we're now getting artificial cognition and artificial learning into the learning process. The question is: great, thanks for that, George — you've got data, you analyze data, that produces analyses of students. Why should we care? One dynamic I think is important is the transition from AI as a tool — from artificial agents as tools — to artificial agents as quasi-colleagues. Now, I'm a little ahead of my time saying this; it sounds silly right now. In five years it won't. When we design learning, for example, we currently design for a human knower. We will eventually need to design for a nonhuman knower as well — meaning we design for the cognitive agent as well as the human individual in that process.
Yes, there are a lot of issues around that. Here's a story from Melanie Mitchell, who wrote a terrific book on artificial intelligence — she's with the Santa Fe Institute, and out in Portland. She quoted her advisor, from a recent presentation he gave at Google, observing just how far the AI field had developed beyond what he had envisioned early on. He had felt, for example, that artificial agents would never be able to do core human creative activities — write a symphony, say; creativity was the domain of human expertise. So a group of researchers took a famous composer and fed an AI system the body of work that composer had produced. Then they selected people who were quasi-experts on that composer — say Chopin, or Bach, or whoever — and played them an obscure piece by that artist alongside an AI-produced piece in the artist's style, and asked: which is the obscure original and which is AI-generated? The experts generally got it wrong. The AI-produced pieces sounded so authentically within the lineage of the composer's well-known portfolio, while the obscure original — likely a variation from the traditional profile — didn't sound like the composer, so they guessed it was the AI one. It's important to recognize that AI is built looking backwards: it takes its information from a historical landscape.

We're now at a place where our intelligence, our cognition, is spread across a range of robots and (indiscernible), if you will. One thing I find, whenever Boston Dynamics releases a new video — in this case Atlas is the marker — is that it's like watching your kids grow up, except instead of staying at a stage for a year, six months later they're a teenager. Just yesterday they were but a toddler; now all of a sudden they're doing backflips and parkour-type stuff. The development of these systems is exponential, which is why people like Musk and others feel there's an existential risk: it's not a linear progression. It's not like human beings, where every 80 years we have to reintroduce what we've learned into a new receptacle — namely a baby — because we die off. AI only grows and learns more. There's an exponential dynamic that forms the basis of the questions Musk is asking.

Now I'm going to turn to cognition rather than intelligence, because the granularity has value for people who are designing learning activities or planning the development of courses and curriculum. So, what is cognition? At its basic level — and this is the definition I use in my cognitive processes course — it's the sensory processes, mental operations, and complex integrated activities involved as we interact with information. That information could be acquired visually or auditorily; it could be you turning over in your head something you learned during a panel this morning, or a conversation with a colleague, or an email exchange. So: sensory processes are things like sight, sound, touch; mental operations are basic things like encoding into memory; and complex integrated activities are creative processes, where you're creating something new. At a basic level, that's cognition. AI contrasts with that in that it's viewed as the science of trying to create technologies inspired by human intelligence. The goal of AI has long been to exceed human intelligence, or to mimic and map it. Companies like DeepMind — which has been purchased by Google — put their mission as: we want to solve intelligence, and then use intelligence to solve everything else. Once again, the typical humbleness we've come to expect from computer scientists. But AI is essentially this idea of getting at the human dynamics of learning and development, so we have something that looks human-ish in how it performs. It's worth breaking down three specific layers here — think of them as Russian nesting dolls. AI is the science and engineering of making intelligent machines. Machine learning sits under the AI umbrella: machine learning is what we experience as spam filtering on Gmail, for example. You might remember a time, even ten years ago, when email was an absolute nightmare — you'd get up in the morning and, yay, two hundred emails and two of them are relevant. The spam issue has been addressed from that end. And within machine learning there is supervised and unsupervised learning: supervised learning requires labeled data sets, while unsupervised learning has the system discover patterns on its own.
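To anchor that distinction, here is a minimal sketch of a spam filter in the supervised sense just described: human-labeled examples go in, a model that generalizes comes out. The toy messages, the labels, and the Naive Bayes choice are my own illustrative assumptions, not anything from the talk.

```python
# Minimal supervised learning sketch: the model only "knows" what the
# human-supplied labels tell it. Messages and labels are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "win a free prize now",                # spam
    "cheap meds limited time offer",       # spam
    "meeting moved to 3pm tomorrow",       # ham
    "draft syllabus attached for review",  # ham
]
labels = ["spam", "spam", "ham", "ham"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["claim your free prize"]))  # the model's best guess
```

An unsupervised version would get the same messages with no labels at all and be asked only to find structure — clusters of similar messages — on its own.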
(indiscernible) — the head of AI, with Geoffrey Hinton — for their work in deep learning. There's another subset, self-supervised learning, but that's not as prominent. Then the final doll: within AI and machine learning, the subset that most closely attempts to mimic human intelligence is neural networks, or deep learning. And the view here — this won't surprise those of you who've done work in learning research — is that for large swaths of psychology's history, the human brain was treated as a black box: we don't know what it's doing, we don't understand what's happening. In a comparable way, researchers are treating neural networks as a black box — not because they want to, but because they can't treat them any other way. The complexity of what goes on in these highly computationally driven systems is such that they don't understand what the system is doing, but they do know the answer is right at the end. So we're left with: well, that's good, thanks for that. But there are some dynamics there.

So when we talk about this from the human intelligence end — what does that look like? What things are these cognitive agents already doing better than human beings? Diagnosis of cancerous tissue, for example, is being exceeded by AI systems. Translation. Investment — there was a paper in Nature recently that looked at what they called micro-crashes, with a delightful statement that these systems move so quickly that there is no place for human intervention in the model anymore, because of the pace at which the trades happen. One illustration that captures it visually for me: we have this landscape of human competencies related to cognition, and we've ceded swaths of it to artificial cognitive agents — from language to translation to speech recognition, increasingly driving, even some level of social and purchasing recommendations, and so on. So there are already areas where we've given space to these cognitive agents. We still have areas where we like to think we're in charge — the domains of art, creativity, book writing, and science. Even there: some of you may recall that a group of researchers recently took a series of papers in one particular domain, fed them to an AI system, and it produced novel scientific discovery, in aggregate, that hadn't been noticed by the humans who had actually written the papers. So that's where we are: a complex, emerging landscape. We're acquiescing space to these systems, and when we talk about learning and the future of knowledge processes, the best time to start thinking about what that looks like is today. Now, there are specific points, as I mentioned, where these systems exceed human cognition. Here's one illustration from progress measurement in AI: a data set of millions of images on which models are trained, compared against the red dotted line, which is human performance. The question being asked is: at what accuracy does an AI system outperform a human being at image recognition? You'll see on the screen that around 2015 we broke that barrier between human and machine capability, and we've essentially said: fine, you're just better than we are — go be you.
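A minimal sketch of the benchmark logic behind that kind of chart: score a model's top-5 guesses against ground truth, then compare to the roughly 94.9% top-5 accuracy figure often cited as the human baseline on ImageNet. The random logits here stand in for a real model, so this toy sits far below the line; the constant and the data are my assumptions.

```python
# Sketch of top-5 accuracy scoring against an assumed human baseline.
import numpy as np

HUMAN_TOP5_ACCURACY = 0.949  # oft-cited ~5.1% human top-5 error on ImageNet

def top5_accuracy(logits, true_labels):
    """Fraction of examples whose true label is among the 5 highest scores."""
    top5 = np.argsort(logits, axis=1)[:, -5:]  # indices of the 5 best guesses
    return np.mean([label in row for row, label in zip(top5, true_labels)])

rng = np.random.default_rng(0)
logits = rng.normal(size=(1000, 100))         # 1000 fake images, 100 classes
true_labels = rng.integers(0, 100, size=1000)

acc = top5_accuracy(logits, true_labels)
print(f"model top-5 accuracy: {acc:.3f}")
print("above human baseline" if acc > HUMAN_TOP5_ACCURACY else "below human baseline")
```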
Other things, like handwriting and digit recognition, are more complex, and you'll see a similar pattern across some of the images I share: examples where the system is delayed because of the complexity of the task, and where progress comes in stages. It's like Atlas the robot from Boston Dynamics — a toddler one year, doing backflips six months later. In this case, handwriting is harder. The varying red dots are all algorithms people have used to try to match human performance. We've gotten to the point of matching it, but exceeding it hasn't yet been successful, because it's a challenge: handwriting varies from person to person, and the less structure there is, the harder the task. On the other hand, something like Atari was pretty straightforward. The red dotted line is a human being playing an Atari 2600, and the blue line — which we blew past in 2013 — is a machine, different models, playing it. If a task is structured and has clear rules, an automated system can very quickly exceed human-level performance.

Many of you have probably heard the AlphaGo example. With chess, in the late 50s and early 60s, people like Herbert Simon were saying it would take no time at all for an AI system to beat human beings; he was off by about 40 years, because it was the late 90s when an automated system finally beat a human champion at chess. But Go — orders of magnitude more complicated than chess — was seen as much further out, sometimes described as the most complex game the human mind has created. Things came to a head when Lee Sedol played against AlphaGo, and eventually he lost. And from there — this is the exponential dynamic again — the people behind AlphaGo, DeepMind, the Google link, decided to take a different approach. I mentioned that a lot of AI, not all, is backward-facing: to develop a system that can recognize cancerous tissue in an image, it needs to see hundreds of thousands, even millions, of images of cancerous tissue. Given enough time, and enough labeled or unlabeled data, it builds a model of what is or isn't cancer, and then it can understand what it's looking at. The same thing happened with the original AlphaGo: like in chess, it was given a range of options — this is how people play the game, these are the outcomes of previous games — so when somebody played a certain move in Go, in this case Lee Sedol, the system could say: I've seen this before; this is the outcome of what happened. That's how it was eventually able to beat him. With AlphaGo Zero, they went out and said: we're going to give you the general rules; you figure it out. You figure out how the game works. And it did. It played itself millions and millions of times, and over a very short period — something in the range of 40-some days — the system that had just played itself, without any existing games to study, figured out Go. Then they played AlphaGo Zero, the system that taught itself, against AlphaGo, the system that beat the first human champion. AlphaGo Zero beat AlphaGo a hundred games to zero. These are the dynamics that are interesting and intriguing when you're talking about AI strategies.
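AlphaGo Zero's actual machinery — deep networks plus Monte Carlo tree search — is far beyond a slide, but the self-play principle fits in a toy. Here is a sketch, under my own assumptions (tic-tac-toe instead of Go, a simple Monte Carlo value update, invented learning and exploration rates), of a program that is given only the rules and improves solely by playing itself.

```python
# Toy self-play learner: no human games, only the rules of tic-tac-toe.
import random
from collections import defaultdict

Q = defaultdict(float)  # Q[(state, action)] = value for the player to move
ALPHA, EPSILON = 0.2, 0.2
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    return any(board[a] != "." and board[a] == board[b] == board[c]
               for a, b, c in LINES)

def moves(board):
    return [i for i, ch in enumerate(board) if ch == "."]

def choose(board):
    opts = moves(board)
    if random.random() < EPSILON:            # occasional exploration
        return random.choice(opts)
    return max(opts, key=lambda a: Q[(board, a)])

def self_play_game():
    board, mark, history = "." * 9, "X", []
    while True:
        a = choose(board)
        history.append((board, a))
        board = board[:a] + mark + board[a + 1:]
        if winner(board):
            return history, 1.0              # the last mover won
        if not moves(board):
            return history, 0.0              # draw
        mark = "O" if mark == "X" else "X"

for _ in range(50_000):
    history, value = self_play_game()
    # Walk backwards, flipping sign: a win for the last mover is a loss
    # from the previous mover's perspective, and so on.
    for state, action in reversed(history):
        Q[(state, action)] += ALPHA * (value - Q[(state, action)])
        value = -value

print("learned value of opening in the centre:", round(Q[("." * 9, 4)], 2))
```

After enough games the table encodes strategy the program was never shown — the same shape of result, at toy scale, as learning Go from the rules alone.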
Meanwhile, basic comprehension of a children's book is, in many cases, still out of scope — unless you look at the AI work in the top right-hand corner of the slide, where sophisticated models have been developed to do it. The thing to recognize: there are core, basic activities where we're giving up lots of space to artificial cognitive agents. So what does that look like in an integrated way, and what does this part of the talk have to do with the kind of work you all do on a daily basis? Let's look at two bubbles: one is human cognition, the other is artificial cognition. There is a region where they overlap. They overlap in our daily lives, in how we do email; they overlapped, for those of you who flew here, in the systems the pilots used to navigate — some things automated, some things a human needs to do, and how they intersect matters. As we get more self-driving cars on the road, that will be an ever greater issue; likewise as we get more drones in the air delivering packages — how do you coordinate these very different systems and get them to communicate? My interest here, as I go through this, is not the outer edges. The outer edges — pure human cognition on one side, the basic algorithms and infrastructure of AI platforms on the other — are interesting and fascinating, but they're not my focus. I'm very interested in the intersection of the two, and in the role of the university, of higher education in general, in helping prepare individuals to engage in a work setting, and in a society, where these things are at play — where we need to understand learning processes, the ways we engage and make sense of the world around us, and how we use and collect data to make those outcomes and processes comprehensible. So the challenge, bringing it down to the practical faculty and learning design end: how do we bring these two together? How do we look at distributed cognitive agents — where both human and technical capabilities are involved — and get them working together to produce greater cognitive impact? At base, it's important to ask: what is it that humans do that might be best handled by technology, and what is it that technology does that might be best handled by human beings? I'll make this argument more substantively over the next several slides, but to reiterate the topic: what does AI do particularly well? It's great at a number of things we as human beings don't do well at a certain scale of data. David Gelernter, the Yale computer scientist, has a statement: if you have three dogs, give them names; but if you have 10,000 head of cattle, don't bother. The idea is that we relate differently to quantity — when quantity exceeds us, we hand it off to artificial agents. So AI is terrific at recognizing patterns. It's great at driving cars. It's great at recognizing faces across a huge number of environments. That's a key skill set — but there's an ethics dimension tied to it. This isn't all sunshine and flowers and happiness. There are things AI systems do that drive at, or start to impact, questions about our role in the world. I read a paper recently — a fascinating one, I might add — "Artificial Intelligence: An Evangelical Statement of Principles." You don't necessarily need to read it; in the end, God wins. Just a heads-up. There are a number of initiatives coming from MIT and Stanford in particular, with investments in the range of a billion dollars to help drive the AI landscape. China in many ways is positioning itself to become the leader, because of the data it collects and the ways it uses that data — ways we aren't engaged in, in Western societies, at the same level. The US has largely offloaded AI development to corporate entities, whereas in places like (indiscernible) it's government-driven.
But here is one illustration of what goes wrong: you have 121 faculty members, and for some reason not a single one is black. That's an important distinction, because the biases of the algorithm are driven by the biases of the people — not necessarily intentionally. It's not "gee, today is a great day to be a racist"; it's "today I'm thinking of the world this way, because that's my perspective," and that's the outcome you get. When you develop these things, you release them, and what's unique, as I said, is that they come back to us: these systems reflect what we've done. So when somebody says, that AI system is biased, it's racist — I'm like, well, okay, but let's not forget where it learned it. It learned it from us. One example is Tay. Many of you have seen this, because it was a delightful example of the joyful arrogance of technologists and a lack of understanding of social processes. Microsoft unleashed a system called Tay — our AI bot is going to be your friend — and, to nobody's surprise, they had to shut it down within 24 hours because it had become a racist Nazi. The outcome worth emphasizing is that these systems reveal our biases; they don't inject new ones. I'm not saying everybody on Twitter is a Nazi, but what these systems learn in large-scale "dialogue" — used loosely — is driven by the data and the interactions that happen on social media. So those are some of the bias issues.

But there's another dynamic that isn't quite bias, and it's a little concerning: AI optimizes based on what it's told to do. Here's an illustration of AI kind of cheating, if you will. In this case, AI was used to classify skin lesions, and in the training images there would often be a ruler beside the lesion. The AI concluded: ruler equals cancer. It drew that correlation because that's how we presented the data. It did the job we asked it to do — and you'll see this more and more as we go forward, because there's a lot of sloppiness in human communication; we understand each other through very ambiguous dialogue. Another example: a self-driving car taught to maintain a certain speed in traffic figured out that if it headed over to a parking lot and drove in circles, it was winning — it was keeping the speed. Another: a game-playing agent whose mandate was "don't lose this game" decided, fine, I'll just pause. Is that what you wanted? And another artificial agent learned to kill itself at the end of the first level to avoid losing in the second. So I just want to articulate that there are clear bias and related issues at play.
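The ruler story is an instance of what's often called shortcut learning, and it reproduces in miniature. Below is a sketch on synthetic data I've invented: a "ruler present" feature perfectly tracks the label in training, the classifier leans on it, and accuracy collapses when deployment photos have no rulers.

```python
# Shortcut learning in miniature: the model learns "ruler = cancer".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
labels = rng.integers(0, 2, size=n)

lesion_signal = labels + rng.normal(scale=2.0, size=n)  # weak, noisy signal
ruler = labels.astype(float)  # in training photos, malignant lesions got a ruler
X_train = np.column_stack([lesion_signal, ruler])

clf = LogisticRegression().fit(X_train, labels)
print("weights [lesion, ruler]:", clf.coef_[0])  # the ruler weight dominates

# Deployment: the same kind of lesions, no rulers anywhere.
X_deploy = np.column_stack([lesion_signal, np.zeros(n)])
print("training accuracy:  ", clf.score(X_train, labels))
print("deployment accuracy:", clf.score(X_deploy, labels))
```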
Now, from our end: what are the implications of this for the quality of learning and the outcomes we're trying to achieve in a university setting? And looking ten years out, what does this possibly look like? I'll briefly apologize for the deluge of text you're about to get on the screen — I'm not going to read it, but I'll touch on a few points. I'll preface this: I did a couple-day workshop last week at Harvard, and I brought up a couple of these slides, and someone was violently angry with me — to the point of shaking — because of a fundamental disagreement with what I had to say around the related humanist concerns. So keep in mind, if you're shaking with rage, you're not alone. I'll posit that we are living in something like a post-learning age, and over the next few text-heavy slides I'll flesh out what I mean by that. Basically, we're getting to the point where we're co-working with agents. Human cognition is being offloaded to artificial agents, and we increasingly need to rely on the things technology doesn't do well. I've skirted that issue so far, but I'll start fleshing it out now. So here's my logic — about four slides of it. First: learning is a constant human activity. We can't not learn; from the time we're born, we're learners. In the past we created institutions able to pass learning from one generation to the next. We put our knowledge in books; over the last 1,500 years we built systems of learning that could take what a society had learned and bring it into the minds of a student.
However, when information increased in quantity, we eventually had to say: we can't do this anymore with the human brain alone. We're going to use, for lack of a better word, an intellectual prosthetic. We stored things in scrolls, printed them in books, and created classification schemes that aggregate basic knowledge upward — for example, the encyclopedia. The concept there is basically: there's way too much out there to know, so let's give you a paragraph rather than a book. It was a way of abstracting up so we could understand an idea without diving incredibly deep into it. Over time, as this growth continued to materialize, we've come to recognize that our existing practices are somewhat incomplete. In today's world, given the growth of data and of data scientists, we've responded with tools and technologies that allow us to gain insights from it — insights from the terabytes of data generated per day on Amazon, say. What does that tell us about human choice and human decision-making? Because we can't make sense of that ourselves: we can't go through trillions of lines of clickstream data and make sense of what they mean. We have had to develop abstracting, computational mechanisms that can gain insight into it.
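As a small illustration of that abstraction move, here is a sketch of collapsing raw clickstream rows into a per-student profile someone could actually ask questions of. The events, columns, and numbers are invented for the example.

```python
# Nobody reads raw clickstream rows; we aggregate them into signals.
import pandas as pd

clicks = pd.DataFrame({
    "student": ["s1", "s1", "s2", "s2", "s2", "s3"],
    "event": ["video_play", "quiz_attempt", "video_play",
              "forum_post", "quiz_attempt", "login"],
    "minutes": [12, 8, 3, 15, 6, 1],
})

# Millions of rows collapse into a handful of interpretable features.
profile = clicks.groupby("student").agg(
    events=("event", "count"),
    total_minutes=("minutes", "sum"),
    quiz_attempts=("event", lambda e: (e == "quiz_attempt").sum()),
)
print(profile)
```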
That's also given us the ability to begin overlapping with the AI conversation. The sophistication now possible — which is where we are — happens more effectively when we accept that all of that complex growth of data, from weather patterns to climate to ocean patterns, cannot be handled in the human mind, or even in physical form. To understand it, it must be managed computationally. Which raises a key issue: technology creates problems that only more technology can solve. Our growth of data through these technological means produces new ways to work with that data. So if technology out-learns humans in many ways, we enter a space where we're dealing with tools that are smarter than us. The definition I'm offering is that a post-learning era is one where traditional learning is performed, and exceeded, by technology, and existing institutions are not adequate for addressing that particular challenge. What do we do? We could curl up in a corner and weep. Or we could ask: where do we store our most sophisticated knowledge? I think, in a lot of ways, our most sophisticated knowledge is not stored in books or classroom curriculum. It's stored in cultural values and cultural norms. It's stored in how we make sense of the world — how we make sense of the range of technical changes we face. What does it mean for humanity? What does it mean to be hopeful in periods of immense technological and societal upheaval?

In that conversation we come to a critical question: what does information consume? With the abundance of information on your laptop, on whatever device you use — what is all that information consuming? Herbert Simon's answer: it consumes attention. Which is perfectly understandable, because if you have a limited attention span and you go onto Twitter, and you choose to like or retweet or rage at someone, you're less likely to respond to someone with a thoughtful opinion than to someone who says something ridiculously asinine and off the charts. You'll respond to that, because there's more of a visceral, emotional reaction. I've seen sane and healthy colleagues go on Twitter and feel the best way to change the direction of US politics is to reply to, hypothetically, the President — and get certifiably upset at the byproduct of that interaction. So social media, by and large, has a negative impact on our mental health, partly because it feeds on information that elicits emotion, and the platforms have been developed that way for a reason. By the way, there was a study out of the UK recently: of five major platforms, only one showed positive mental-health benefits. The platforms: Facebook, Snapchat, Instagram, YouTube, and Twitter. Any guess which one had a positive value for mental health? I heard a few. If you said YouTube, you're right — great job. On the other hand, the worst one, particularly for young women, was Instagram, because of the social-comparison dimensions. Anyway — that's what information consumes.

What we want to do — knowing that we're attention-scarce, and that we don't want to be ruled by algorithms that present us with the most controversial, most provocative statements — is shift toward something more structured from a learning perspective, in terms of being intentional, mindful, and focused on ways we can make sense of the world around us. So I articulate sense making as that framework. That's the kind of model we need to turn our attention to, where our curriculum reflects not learning specific content per se, but being able to understand the relationships of concepts to other dynamics — put another way, being able to label and name an experience as it's happening. From a different angle, it's the ability to move from the dialogue of what's changing into the dialogue of what we are becoming. Who are we becoming? Less "who tweeted what today," and more a reflection on what those dynamics mean. It's also a process of meaning-making. And how do you reflect that in academic curriculum, in a university setting where the students in your classroom are exposed to a ridiculous array of information, parts of it inaccurate, working with technologies that are automated behind the scenes, giving them their daily information jolt derived from their previous click patterns? If you once accidentally — not that this has happened to me — click on a story about Jennifer Lopez, you will for months receive updates on Apple News regarding Jennifer Lopez. She's doing great, by the way. This is a dynamic your students need to be aware of, because there's a cognitive underpinning to it: the system is using you as much as you're using the system. And so we turn to a range of strategies where we socially sense-make through stories and narratives, and to technical flows — a number of places where this becomes more of an issue, where we say: we want our agent to do this. That's in many ways how we have to structure the system. This is the slide that caused that person to sputter and be angry with me, where I set out some differences in terms. He felt that everything I was calling sense making was actually learning — which I disagree with, but that's fine. The difference in terms, from a design perspective, is that these are in many ways fuzzy boundaries — esoteric concepts that are very hard to attach quality metrics to, or even design for.
Things like coherence, resonance, narratives, culture — and the list goes on. How do you, when you're sitting down as a learning designer, decide what an automated system should do with a particular learning activity? Whether you're bringing in a personalized learning module to help someone understand calculus concepts, or bringing in some tool that offloads memory to a networked approach — what kinds of questions do you want to ask yourself? I would suggest the narrative shifts toward the things that aren't duplicated by technology — the things technology doesn't do better than people. Now, why does this matter? It matters because learning is mainly seen as a cognitive activity — and it is. And we are not going to win at cognitive activities against artificial cognition. Whereas sense making is cultural; it's communal; it's the stories embedded in the narratives we tell one another. And learning is often problematic if we learn from existing memory whose concepts have become outdated — a doctor, for example, making decisions based on previous rules of treatment, not up to speed on new research, will actually produce wrong responses when action is taken, if we're operating on a traditional learning model in a rapidly changing environment. What does this mean for faculty? Through a number of lenses, there is something positive in learning analytics. The ability to engage with our students and offload cognitive capabilities within teaching practices is sensible. The quality lens — the full cycle of digital learning, asking specific questions about where the AI pieces fit in — and the act of contributing to the learning sciences are all big benefits for faculty members thinking about what's happening in their courses and classrooms. But there's a key issue at play, and for me it's reflected very well in the idea of the illusion of explanatory depth. There's an entire book on this, called The Knowledge Illusion, and it ferrets out the idea that we think we know more than we actually do. A Private Universe, produced twenty-some years ago, asked (indiscernible) why we have seasons, and the common response was that it's how close the sun is to the earth — rather than the tilt of the earth's axis, which is the correct answer. So we can go a long way with fundamental misconceptions.
This is my opinion, and some will strongly disagree: I am not that interested in whether you can name the capital of a certain state. I'm much more interested in whether you can understand the integrative dynamics of a political change that happened in one state, and how it might impact that society more broadly — how it might impact disadvantaged populations, how it might impact the social fabric of that region. To me, those are the things we need depth in. Naming the capital — Google will get you that. In many ways Google has solved bar fights: who did this, who did that — you used to be able to argue for hours and hours; now you check Google. And then there's this: a researcher gave a partially drawn bike to her students. We've all probably been on a bike; we know what one looks like. The drawing wasn't complete, and she said: finish this bike. Maybe she didn't have the brightest students in town, I don't know for sure, but what they produced for bikes was ridiculous. And this has been repeated over and over again. Ask someone you interact with on a daily basis: how does a toilet work? Some people have a fundamental misunderstanding. So it's not that we have no depth — it's that we have far less of it than we think we do.
(indiscernible) Let the routine cognitive work be designed for artificial agents, and let the complex integrated activities — the ones involving sense making and conceptual ideation — be the domain of human learning activities. Then we want to make sure we have explanatory depth in the things we consider most central or critical to our curriculum. Now, the final several slides: what does this mean for learning designers? One way to look at it, as I touched on earlier: we have mainly had a human knower as our object of design, and we're now starting to ask which parts of the process a human knower should do and which the system should do. There's a set of questions we've articulated relating to the legacy systems and practices we see on campuses. There's the need for systems thinking. There's the attribute I've been trying to articulate around sense making and who we are as people. There's technology as a co-agent. And there's starting to look at which parts of the system actually change — like I mentioned right up front with my students at Red River College, where one half of the classroom changed and many of the faculty didn't. In a practical way, what does this look like from a design end? It might look like this: you decide to break down the knowledge work — what are the artificial and human dimensions, and what is an integrated structure? For example, if I were planning to train pilots at Boeing, I would do a task decomposition of everything they're expected to do, and of each task I would ask a specific question: who should do that, the human or the agent? If I were designing a course in cognitive psychology, I would look at each element and ask: is this a conceptually consequential attribute that the learner needs to process? Or is this something findable, discoverable? Or is this something computable by an artificial agent rather than a human being?
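Those three questions nearly write themselves down as a routing rule. Here is a hypothetical sketch of the decomposition exercise — the task list, the attributes, and the precedence of the rules are illustrative assumptions on my part, not a method from the talk.

```python
# Hypothetical task-decomposition routing: human, agent, or lookup?
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    consequential: bool  # must the learner process this conceptually?
    findable: bool       # cheaply looked up when needed?
    computable: bool     # reliably done by an artificial agent?

def assign(task: Task) -> str:
    if task.consequential:
        return "human learner"
    if task.computable:
        return "artificial agent"
    if task.findable:
        return "reference / search"
    return "human learner"  # default to the human when unsure

course = [
    Task("interpret a conflicting memory-research finding", True, False, False),
    Task("compute descriptive statistics for a study", False, False, True),
    Task("recall the year an experiment was published", False, True, False),
]

for t in course:
    print(f"{t.name} -> {assign(t)}")
```

The output of the exercise isn't the code — it's being forced to commit, task by task, to who should be doing the cognitive work.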
As a final point, I'll leave you with a series of questions from a designer perspective. What do you decide to automate, and why? What are the developmental implications for the student? I can't overemphasize this: there is real value in confusion; there is real value in practicing self-regulation. I have a lot of issues with many of the nudging activities being used on university campuses, because a key long-term cognitive attribute is self-regulation, and part of learning is developing that self-regulatory capability. How do we improve self-regulation in our students through the curricular process? If we automate away something just because it's easily automated, are we minimizing students' longer-term capability to self-regulate because we've taken that particular attribute away? What does quality look like when we adopt these types of agents? For many of you, they're not tomorrow — they're probably here already. I assume many of you already include some kind of personalized learning system in your curriculum design. But if you're like most institutions, it's included not because you understand which cognitive attributes it takes over; it's included because a senior administrator said we're going to use this, or someone came along and said this is the platform we're going to use — like ALEKS for a certain level of instruction. These systems were included not because they were designed to intentionally rearchitect the relationship between human and machine; they were included because somebody bought a product from a vendor who showed up at the doorstep. So we want to ask: what are the quality dynamics of this kind of relationship? Then, finally: how do we prepare students for the meta-skills of automation? And by meta-skills of automation, I mean being aware. I had a group of students I just finished taking through this process, with a simple routine: what are you doing with technology, and why are you doing it? All of us have a Cheetos diet around technology — it's there, we use it, it's probably not the best for me, and it takes a lot more time to cut up broccoli spears. We use tools in ways that are detrimental to our emotional and mental health, and in ways that are suboptimal for the kinds of people we need to be: self-aware, mindful, focused, and so on. I'll stop there. Any questions?

>>STUDENT: (indiscernible)

>>This is an awkward period. Is there somebody up there? I think Deb has a question here, if you throw the mic — right up front here. And if anyone else has a question, feel free to raise your hand, and we'll try to identify you early enough to get a mic out to you.

>>Thanks. So now I'm giving you all time to compose your own questions here. So — some of the questions you're asking us are actually the questions I have for you, so maybe we can turn that around. I'll restrict myself to two things: a comment and a question. The comment: I'm struggling a little — not with the idea that the focus on sense making is wrong, but with where it differs. I've seen the faculty role as generating knowledge, curating knowledge, and making sense of that knowledge; that's what we do in higher education. I think your point is more nuanced, in terms of the difference between learning activities and sense making. So I was hoping you could talk a little more about how this differs from the work we're doing now in how we think about our curriculum. I'll let you address that, and I have one more.

>>Sure. Like I say, if anybody else has a question, raise your hand while we're waiting. Where did I have that slide? So, this has as much to do with how you define learning. Something we did about a year ago: I looked at the prominent definitions of learning, the way we've typically had them in education or educational psychology. Most of them are some variation of Driscoll's view that learning is a persisting change in performance, or performance potential, as a result of having undergone some experience. There are a few that frame the relationship slightly differently and emphasize information more, but that's how we've defined learning traditionally in the learning design space and in educational psychology. And to me it became clear that, by that definition, there is no difference between how a machine learns and how a human learns. I mean, if I look at an algorithm or a model that you've developed — (Internet cut out).
>>Now, I selected sense making because it was vague and nebulous enough, but familiar enough to be communicated. So, one way to consider your question: let's just say you agree with me. You don't have to, of course. But let's say learning is over here; it's a node on this spectrum. Over here we have sense making. Learning is defined by curriculum, it's defined by the practices we do in classrooms, and so on. It's the traditional view: a change in performance or performance potential as a result of having undergone an experience. Sense making is over here; it has weird words like resonance and concepts like coherence, and we start to deal with narratives and story, and with attributes like cultural dynamics and so on. So you could say you would have to torture the traditional definition of learning a fair bit to describe how what humans do is in any way different from what technology does. I'm saying that ten years out, learning will be largely the domain of technological entities. It already is in many areas: image recognition, language, and so on. So what are we going to do, and what should we design in our courses for people to be doing? I say it is that sense making part, the thing that (Cut out). That is different from learning the way we've had it. Someone else could say, I learned sense making as I was learning, as my good friend did. That's possible as well. But it's kind of like saying: what did you have for breakfast? I had food. What are you going to have for lunch? I'll have food. It's so vague it tells me nothing. I mean, did you have eggs and bacon and fruit? Because that tastes different than enchiladas and tortilla soup. My argument is that we need a richer vocabulary around the practices we want reflected in our curricular activities. If we just call it all learning, it's too much like a machete when a scalpel is needed. I'm not sure if that answers your question. But the language will allow us to parse the difference between the cognitive agent and the human. Maybe I'm proposing one. (Cut out) Call it learning 2.6, subset A, that type of activity. That's fine. But find a way to distinguish between what the artificial agents do and what the human agents do, because right now we don't have that language. >>So that was helpful, thank you. And I think, if I'm understanding this correctly, it's consistent with the writings of folks talking about what we need to be doing in terms of educating students for a new reality, a new world of work, like Joseph Aoun from Northeastern. So if I understand you correctly, you're actually having us think about different terminology and different language around what we're doing as a way to make that split, right? >>Yeah. >>Okay. So, whether it's –>>If you could, there is a question over here while she's asking. Oh, one there? All right. Bring one there and eventually we'll get one there. >>I'll save mine. I have a wrap-up question for you. >>Sure, sorry to interrupt. So there, and then if that question could be brought to that table or that mic. Yes. >>So, I think a lot of us in the room might interpret some of this in terms of, say, Bloom's taxonomy, where higher levels of learning build upon what you're describing as learning. Where in order to be able to competently make sense of certain areas, we have to have this foundation of learning.
So, to me, there has to be some level of assessment of that type of learning for a student to be able to competently engage in that sense making. I think that's at least how we look at our process of where we see learning going. Absolutely, sense making is our ultimate goal, but we might describe that as a higher expression of learning on Bloom's taxonomy. So could you speak to that? >>I'm not here to do the (indiscernible) language game process. I mean, you have your own language within your institution and within the field. I spent quite a bit of time looking at the QM frameworks and QM models to make sure what I was talking about was fairly resonant. For me, the biggest question I kept coming back to was: if I'm a learning designer or a faculty member trying to prepare for five to ten years in the future, how should I be thinking of my curriculum, my assessment, my teaching practices, and the data and other technology outputs, to improve the overall experience of the student? That was the orienting framework I brought as I thought through the presentation today. Where I landed: whether you use Bloom's taxonomy, that's fine, if there is some way it can tell you, or your student, or your design process, which outcome is better done by an AI system versus one that is better done by a human being. Now, you could say from Bloom's perspective that once you get up to the synthesis levels, there is more complex, integrated, creation-type work going on. If that gets you to the point where you have a mechanism to decide between should a human know this, or should a cognitive agent support the human in that development, that's where I'm trying to land. >>(Off mic). (indiscernible). >>Some of you may not have heard; it's a big room. The point made by the gentleman is that it's not an either/or; it's a developmental or staged process you move through. And my view was a spectrum. I'm not saying you're either learning or sense making. That's the problem with tables: they don't reflect that there are different profiles or attributes. But it is worth thinking about: does a person need to know or acquire the same kinds of things in a world with AI systems as they've had to in the past? Is basic knowledge needed in a world where none of us probably do calculus by hand? None of us do regression analysis by hand, right? Many of us use an agent or platform where we bring the data in, we manipulate it at extremely rapid paces, and we have an output. We're willing to trust that cognitive system enough that we don't understand core things, because we understand that this is the output. So, somewhere in there, whether you're talking Bloom's or whether you're talking spectrum, I think we're similar in our orientation, with one caveat on my end: what should no longer be taught when we have agents that may be able to do a lot of the basic acquisition work? Over here; I think we're getting close to out of time, too. Did that answer it for you or not? >>(indiscernible).
So the question raised by the lady was: how do we assess these things? And needless to say, that is a massive area of interest. How do you assess soft skills? How do you assess employability skills, so-called 21st-century skills? There is a fair degree of interest in that conversation now, even though nothing is effectively settled on what it should look like. And when we bring in psychological instruments that allow us to start assessing some of those attributes, we get Big Brother-y real fast, and that raises much more significant bias questions. Was there a final question here? There is a question back there. Yep? >>This question will be intentionally vague, but I'll shorten it up because I know we have other sessions to go to soon. With the concepts and terms you introduced today, including algorithm, artificial intelligence, sense making, learning, and so forth, could you tie in the human psychological concept of heuristics, whether it be the availability, representativeness, or anchoring heuristics, and how that all intertwines? >>That's a great question, and a completely separate presentation in terms of being able to answer it meaningfully. I think we do need to focus explicitly on some of those questions. If you don't mind, I'll broaden it a little to bring in cognitive bias as well. Cognition is part of it, but how do we broaden that conversation to meaningfully address that nuance when we're making decisions? How we process data when we're making decisions, and the biases we bring into those dynamics, start to become consequential. Or take some of the common work on heuristics from dual-process thinking, the thinking fast, thinking slow kind of dynamic. Deep and meaningful. Our human cognitive processes are very creative, if you will. Use the availability heuristic as an illustration, or recency bias, or other attributes that come in. We are not at all like a computer in our thinking. Sometimes what schooling does, and it's a bit of an issue, is make cognitive processes structured and routine, which surprisingly makes us uniquely replaceable. So, I think understanding more about the sloppiness and the innovation with which we do our thinking is still a largely outstanding question. And in the longer term, we still need to ferret out how a fuzzy logic system begins to take advantage of what have been established as human biases or heuristics of thought; that may be an interesting next step. But it's certainly a relevant and complex question. >>Okay. Just as a wrap-up then, since I think we're getting close to time. We're about to leave here, finish out the conference, and go back to our institutions. So, to the extent that we in this community serve as change agents within our own institutions, especially those that have begun the work of looking at how to integrate AI and machine learning in education, what are the top couple of things you would have us do when we go back to our institutions to help the work that we do move in the right direction, in your opinion? >>Well, one is to recognize, like I said earlier, that it's not a future state; it's here already. You already have a lot of cognitive activity in your personal interactions with information on your phones and your technologies. The institution is already making decisions around cognitive artificial agents. I think one of the first things is to put a stake in the ground and raise the likelihood of a conversation.
Meaning, we're sleepwalking into this automated future without explicit decision making. I think that's one: recognize it's here, it's happening, it's developing. At some level we had a period a number of years ago, and I think it's still there, where the line was that everybody should be a programmer. Which is a fine conversation to have, and I'm not saying that people shouldn't be programmers. But I'm saying people should understand how an algorithm works. They should understand what happens when you automate something. They should be conscious of the social dynamics and the impact. Part of that involves getting conceptually familiar with the concepts of AI. There is a range of books. There is a good one that just came out; I mentioned Melanie Mitchell's text, which I think was published last week or so. I went through it on the weekend; it's a solid introduction to AI. Gary Marcus has a good one, Rebooting AI, which is a pushback on the deep learning models that are prominent. There is another one, Architects of Intelligence; I think Martin Ford published that. It's good to look at as well for a range of different voices. So, in addition to raising awareness that it's happening and starting to engage with it intentionally, the second thing is getting your own familiarity with the language, because much of what you hear called AI is, like I said earlier, just technology that we don't yet understand. There are a lot of questions that arise around that. Finally, I would definitely advocate for starting an on-campus conversation about what this means. It is something we need to think about today, but it is also something we need to prepare for in the long run, whether that's communities, online communities, special interest groups, or something else to get the conversation going. The end. (Applause). >>We're about at break time. George, just like all of us, is wearing several hats. He has to go back and teach a class, but he has graciously said he'll come back when he's done in the afternoon. So if you have some burning questions that you didn't want to ask in public, or you want to catch him and have a conversation, you can do that then.
(End of webinar).
