
Ian Linkletter’s Call to Action at Reclaim Open

Image of Ian LinkletterWe’re honored to have edtech’s patron saint of resisting student surveillance, Ian Linkletter, join us on July 24th at 12 PM Eastern (9 AM Pacific) to discuss the crucial role of ethical edtech in our current moment. As many of you already know, Ian’s experience battling Proctorio’s SLAPP lawsuit has provided him a uniquely personal perspective on the tremendous costs and dangers of ceding control of higher education’s mission to greedy, unscrupulous vendors. His cause is that of anyone who believes education is not only a fundamental right, but provides the basic toolkit for every citizen to battle tyranny. Join us at https://watch.reclaimed.tech/reclaim-open for Ian’s call to action advocating for ethical edtech in the work we do.

Generative AI & the taste of sweet surrender

By: mweller

I’ve attended a lot of AI talks recently (I mean, even if I tried to avoid them I would still have racked up a few). And here’s my hot take for education – just go for it. I don’t say this as an AI enthusiast, I find it quite boring and kind of soul sucking, but shouting loudly and hoping it will go away isn’t a viable strategy. As I argued in my last post, it has a strong inevitability factor, and lack of engagement risks doing ourselves and our students a disservice.

That’s not to say we shouldn’t fight to make it open, to avoid bias in datasets, and be critical of its output – we absolutely should. But also let’s bring it into the fold in higher ed. It’s a form of knowledge tool after all, why would we be against that?

There is a lot that could be good about it from a student perspective. AI-infused higher ed may be more democratic – everyone has access to advanced smart tutorial bots, not just rich kids (although I’m sure capitalism will find a way to make it exclusive somehow). Gone is the need for expensive textbooks that disadvantage poor students. AI tools can help students in many ways – Mike Sharples has a good table of possible educational uses in this UNESCO publication. They seem like good things for students.

We will need to teach students to critically evaluate and assess the quality of AI content, as it’s going to be everywhere when they leave higher ed. Also how to make the best use of AI – for instance, developing skills as prompt engineers will be valuable across all disciplines. Making the campus an AI-free zone is the same as when we used to pretend students wouldn’t access the web. Almost every career or pastime is going to be touched by it in some way; it’d be doing the dirty on our students to pretend that it doesn’t exist, like when we used to make them pretend they read things only in print, never online.

If your assessment can be passed by AI, then change it, what comes out may be more meaningful for students. AI assistants that can help with navigating the University system will alleviate a lot of stress. Getting students to use AI as a tool might be fun and shifts the power dynamic around knowledge. Rethink your curriculum and discipline from an AI rich perspective. All of this is easy to write, but difficult and costly to do.

Then concentrate on the bits AI doesn’t do well. Focus on the social, the connections, the meta-cognitive skills, sympathy, care and human aspects of education, which have been affected by an over-systematisation of higher ed. Ironically the robots may make it all more human again.

To emphasise, I don’t say engage uncritically – indeed, that criticality and holding to account is a key reason to get involved. Otherwise a version controlled and determined by antisemitic idiots who own large tech corporations will be the only option. But there are real benefits for students to be had, I believe, and we need to ensure these are at the forefront of what universities do with the tech, because otherwise we’ll get sold all sorts of faux solutions to fake problems.

After fretting about it and how it can be stopped, I have now embraced the sweet bliss of surrender. I’m just not going to, you know, do any of the actual AI stuff, because it doesn’t really interest me. But there are plenty of good people out there who are excited by it, so have at it, we need you.

The inevitability, or otherwise, of ed tech

By: mweller

In Metaphors, I have a chapter about VAR (Video Assistant Referee) and Learning Analytics. In it I make the case that VAR got to the point where its implementation in football seemed inevitable. Everyone (fans, pundits, players, not sure about referees) wanted it – mistakes were made by refs, and then analysed in detail in the studio by pundits with access to multiple high definition camera angles. It seemed ludicrous that the ref, who was actually making the decisions, shouldn’t have the same access. I go into some of the problems with the actual implementation in the chapter, but I want to revisit that idea of ‘inevitability’ in this post.

The supposed inevitability of a particular educational technology is a tactic often deployed by those with a vested interest in its adoption. After all, what better marketing ploy than to have people believe that a particular product will be adopted regardless of what they do. It creates an urgency, not to say a panic, to adopt now, and invest heavily. “This stuff is happening whether you like it or not”, is the cry, “miss out and your competitors will be so far ahead of you, that you’ll never catch up.” If you get that belief circulating widely, then sit back and watch the bucks roll in.

Of course, nothing is inevitable – we could be hit by a meteor tomorrow, or collectively decide to revert to an agrarian existence. But some things are certainly more probable than others, and the inevitability argument can be made more strongly for them. Back in the 90s I made a version of the inevitability argument about the web in education. Not everyone agreed, but I think some form of online education adoption was pretty much guaranteed to occur at that stage.

Perhaps a way to think about inevitability, or likelihood, is to imagine five years hence. If you then rewind to this point in time, in how many instances is that technology adopted widely in education? Sort of like Doctor Strange figuring out the probability of success for the Avengers in the multiverse. Discounting the more outlandish versions of the multiverse, where aliens land or we get turned into mushroom-infested zombies, play the mental game of how often that technology gets implemented. If it is the case in nearly every instance, without some major alteration to current society, then we can consider it largely inevitable. If a different set of pathways is easily imaginable, then less so. Note, this is different from saying whether you want it to be inevitable or not – just how likely it is to happen.

I think the web in 1995 comes out pretty consistently across these different futures, but other technologies I would say have a lower inevitability score, despite what their proponents claim. MOOCs, for instance, weren’t really guaranteed to be adopted around 2008 – they caught a particular wave of imagination. I wish OER had a higher inevitability score, and one can imagine alternatives where it is more universally adopted. Some form of mobile learning might also be argued to have a reasonable inevitability ranking. Blockchain? The Metaverse? I’ll let you decide.

My hunch is that inevitability is influenced by two main factors: how much the technology is developing regardless of adoption in education, and how much there is a demand for it by learners. Look at the web here, people were using it, many wanted to use it in education, and it was developing rapidly. All of this brings me onto the current inevitable ed tech du jour, generative AI. It scores high on both those factors: it is developing apace regardless of whether education gets involved or not. And students are using it widely, often despite barriers put in place by universities to stop them. So, yeah, whether we like it or not, I’d say it looks pretty inevitable for education in some form. In which case, better to be using it how we want, which I’ll come on to in the next post.
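The two-factor hunch above can be sketched as a toy scoring function. To be clear, the function, the multiplicative combination, and the example ratings are all my own illustrative assumptions – subjective guesses, not measurements:

```python
def inevitability_score(external_momentum: float, learner_demand: float) -> float:
    """Toy model of the two factors above: how much the technology is
    developing regardless of education, and how much learners demand it.
    Both inputs are subjective ratings in [0, 1]; multiplying them means
    the score is only high when BOTH factors are high."""
    return external_momentum * learner_demand

# Hypothetical illustrative ratings, not data:
print(round(inevitability_score(0.9, 0.9), 2))  # the web in 1995 -> 0.81
print(round(inevitability_score(0.4, 0.2), 2))  # the Metaverse, say -> 0.08
```

The multiplication is the point of the sketch: a technology racing ahead outside education but with no learner demand (or vice versa) still scores low, which matches the argument that both factors mattered for the web and now for generative AI.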

Proceedings of the 2022 Connected Learning Summit Released

On behalf of the Connected Learning Summit Conference Committee, we are pleased to announce the publication of the Proceedings of the 2022 Connected Learning Summit.  It is our honor to share with you a proceedings that celebrates participatory, playful, and transformative learning. In 2021, the Connected Learning Summit became a fully online event, supporting inclusive,... Read more »

The post Proceedings of the 2022 Connected Learning Summit Released appeared first on Connected Learning Alliance.

Smart motorways and lessons for tech adoption

By: mweller

In the UK at least, the implementation of smart motorways has been a curious story to follow, and I think in its roll-out and reaction there is much that can be learnt for technology adoption across higher ed.

First up, what are “smart” motorways anyway? They are “a section of a motorway that uses traffic management methods to increase capacity and reduce congestion in particularly busy areas.” There are three types of them:

  • Controlled Motorways – these keep the hard shoulder but have additional technology such as variable and mandatory speed limits to control the speed of traffic. They can also be widened sometimes to have an extra lane.
  • Dynamic Hard Shoulder (DHS) Running Motorways – these temporarily increase capacity by utilising the hard shoulder at peak times. They also have emergency areas (EAs) providing a safe place to stop in an emergency.
  • All Lane Running (ALR) Motorways – these permanently convert the hard shoulder into a running lane to increase capacity. They also have emergency areas and stopped vehicle detection (SVD) technology. It is this version that many people think of, and which has attracted the most criticism.

A lot of money (and traffic queuing due to disruption) has been spent on implementing smart motorways, but the rollout of the last category of ALR smart motorways was paused following safety concerns. The Daily Mail and Liz Truss are strongly opposed to smart motorways, which makes you feel there must be some merit in them. So let’s look at the issues and any lessons they may hold for ed tech.

Emotion can trump evidence – the first round of safety data reveals that of the three smart motorway types, the ALR ones are lowest for personal injury collisions, but highest for killed or serious injury collisions. So, it seems that they’re very good at controlling traffic to prevent bumps and shunts, but if someone breaks down and uses the hard shoulder to stop, then it can be fatal. But, ALL versions of smart motorways are safer for all types of collision when compared with conventional motorways. However, much of the reaction against smart motorways is an emotional one – people feel safer with a hard shoulder (I know I do), and it may be that no amount of hard data can overcome this.
From an ed tech perspective this means that you shouldn’t underestimate the emotional reaction to technology. This can be both simultaneously perfectly valid (getting killed on a hard shoulder), and irrational (the data shows they’re safer than the motorways you’ve always used). Ed tech can feel threatening and raise legitimate concerns (eg lecture capture and professionalism) but also be irrational (the lecture model was never that great to start with). Balancing these two takes careful negotiation.

The push and pull of ‘smart’ – adding ‘smart’ to something makes it immediately appealing to many people. Smart cars, smart phones, smart motorways. The combination of smart cars and smart motorways might well end up being safer and keep traffic running more smoothly, but removing the hard shoulder doesn’t seem very smart when you need it. For other people (eg Daily Mail readers), adding smart to anything may as well be calling it woke.
From an ed tech perspective we’ve witnessed the lure of the latest thing many times, with the “get on the bus or lose out” mantra for MOOCs, AI, learning analytics, etc. These can cause both camps to retrench to very pro and anti views. The language around this stuff is important.

Efficiency as priority – smart motorways are quite appealing to accountants and planners. There is this lane that is unused on nearly all motorways. We could increase capacity, reduce queuing, improve air quality simply by opening it up. Duh, why did no-one think of this before? But many systems need inefficiency built into them to deal with unexpected circumstances. You don’t need a hard shoulder, until you really need a hard shoulder. An inefficient system is not necessarily a bad system.
Higher education has a lot of inefficiencies in it, and the appeal of ed tech is often to eliminate or reduce these. Sometimes that’s useful, but often what appear to be inefficiencies are deliberately inbuilt systems of care.

Short-term solutionism – part of the drive for smart motorways is to alleviate road congestion. Part of the reason we have road congestion is because of a failure to invest in public transport and a nationalised system that makes it a priority and affordable. It’s simply easier and much cheaper to drive most of the time. However, when Daily Mail readers bemoan these bloody stupid motorways, how many are also calling for their taxes to be spent on public transport? So smart motorways are a part-solution to a problem, but they’re not the fix.
There are of course many ways we could apply this to ed tech. For instance, allying with third party content vendors may be a good fix for immediate online delivery, but doesn’t develop staff expertise.

Over-promise of a reasonable idea – in general, smart motorways are a sensible thing to do largely (although I’m not convinced by the ALR versions) and will be part of a more data driven network. But the promise of them has been oversold somewhat, at least in the short term, and there is a reaction against this.
Of course, we know that ed tech is never over-sold (ahem), but if it were to happen, then what might follow is a backlash and growing cynicism around ‘next big thingism’ instead of a more student-focused appropriate application mindset.

Smart motorways are probably a good idea in the long run, but they need to be implemented carefully and with an appropriate eye on the data, while simultaneously investing in better forms of public transport. You can slot the ed tech equivalent into a higher ed version of that sentence for likely any tech (try it with ChatGPT, for instance).

What Happened? The EdTech Pandemic Podcast

I want to get this out of my head and on to the blog because I had this idea while in conversation with Reclaim’s Pilot Irwin and Occidental College’s Jacob Alden Sargent yesterday, and after sitting on it for more than 12 hours I think it could be interesting. The idea is pretty simple: channel my best Terry Greene and talk to folks about their edtech pandemic stories, and the subsequent fallout. This is a topic that came up again and again while traveling with Brian in February, and it’s no secret the sector was hit particularly hard, and it seems many are still shell-shocked professionally (not to mention the broader personal toll). This was already one of the themes I planned on writing about from the road trip given there was a tentative sense of trying to move on.

And yesterday while talking with Pilot and Jacob, the impact COVID had on Jacob’s edtech group came up once again, and the stories are powerful and important. Sounds very much to me like folks are still trying to make sense of what happened. So, in that spirit, I would be interested in just talking to people about the impact of COVID on their edtech affiliated group(s). How did it play out? What was the aftermath?

In other words, “Jim Groom, what happened?” said in my best Dr. Oblivion voice. I understand folks may want to avoid this topic like the plague (pun intended). I also understand this may be a terrible idea. What’s more, someone may already be doing it, or want to do it, and in that case go for it. But if not, and folks are interested, I would love to have some conversations with any interested parties about their edtech group’s institutional story during COVID, as well as get a sense of where they are now. Maybe I’ll have a couple of folks interested from OER23, and I would love to put together a bit of an archive to capture these stories before they get lost in oblivion.

Building Community with Discord

Image of Reclaim EdTech GIF featuring a TV with Discord logo in front of static

Next Tuesday is the start of the free Reclaim EdTech Flex Course focused on building community on the social platform Discord. In the first episode, airing next week on April 4th at 12 PM Eastern, Lauren Hanks and I will discuss some of the inspirations that led us to using the social platform Discord in combination with the live streaming service Streamyard to run our sessions for OERxDomains21.

Image of the OERxDomains21 Schedule

MBS’s TV Guide-inspired design for the OERxDomains21 Schedule

That experience was so amazing on so many levels, and it led us to use that same combination soon after to run our Domains Workshop for admins in June of 2021 (still referred to as the Reclaim Roadshow). Lauren built out a template for the work she did during OERxDomains21 in Discord, as is her wont, and over the course of the next year we used that for our workshops.

Our first full blown virtual workshop after OERxDomains21 integrating Discord and Streamyard to great effect

But we felt like we were missing out on some of the potential for more sustained interactions and connections Discord offered, so we started imagining what it might be like to hold more regular events for our community in an always-on Discord server to start getting more intentional about outreach and community building. And in April 2022 a whole new division of EdTech at Reclaim Hosting was born using Discord and Streamyard as our primary means for running regular workshops and flex courses. We learned a lot in our first year and we’re planning on unveiling what’s in store for year two of Reclaim EdTech at Reclaim Open, but in the interim you should really join our Discord server next week to see how you, too, can build community in Discord!

Leading with the Art

One of the smartest things Reclaim Hosting has ever done was to partner with Bryan Mathers for our art. The prospect of sitting down with Bryan to collaboratively brainstorm how to communicate what we do visually is an absolute joy, I cannot recommend the experience highly enough. Our brand was in many ways grounded in his first Reclaim Hosting vinyl logo Bryan doodled while we were chatting in Barcelona back in April of 2015. It was love at first sight. And that has since been the basis of an entire Reclaim brand that has been consistently fun and fetching.

Original Reclaim Hosting visual Bryan Mathers doodled

We have used him innumerable times since then to prepare our various products as well as teaming with ALT to build out an entire conference theme for OERxDomains21—which may be a high-water mark for our collaborations. Anyway, Bryan is amazing, and a few months ago we worked with him to not only come up with an aesthetic for our upcoming conference Reclaim Open (which we have yet to unveil in its entirety) but also to try and upgrade our main site header image to subtly suggest the changing nature of our business thanks to the advent of Reclaim Cloud—which is now almost 3 years old, which is insane to even think about.

A sneak peek of some of the Reclaim open art in technicolor

As you can see from the old reclaimhosting.com header image that we had on our site for close to 8 or 9 years, the VHS tapes were applications that could be easily run within cPanel, such as Drupal, WordPress, MediaWiki, Scalar, Omeka, etc. All supporting the “Zombies” home recorded tape 🙂 All of this not only assumes the coming of the copyright zombies and the learning undead, but also that you would be “playing” these tapes within cPanel.

Old Reclaim Hosting Header image

Our new header image sticks with the VHS metaphor (although it might be begging for a laserdisc or DVD makeover here soon) but cPanel is now just one of the tapes you can play. What’s more, several of these tapes (namely Scalar, WordPress, and Omeka) can be played in either cPanel or Reclaim Cloud, while others (like PeerTube, Ghost, and Docker) are exclusive to the new Reclaim Cloud containers. It suggests all kinds of format and backward compatibility parallels, and the old media metaphors really map onto our shift from an exclusively cPanel shop to a much broader library of applications for folks to choose from.

Image of Reclaim Hosting's new site header

Reclaim Hosting’s New Site Header

Again, Bryan works on so many levels that are both inviting and soothing while at the same time epitomizing these subtle, yet crucial, changes we are trying to communicate to our community. What’s more, all of these tapes on the Reclaim shelf fall under the aegis of a new element of Reclaim Hosting rolled out almost a year ago now: Reclaim EdTech. And that tape sits horizontally above the others to highlight an umbrella of support for both cPanel and Reclaim Cloud. It’s so beautiful!

You’ll also notice nothing changed with the “My Blog” tape, that one is sacred 🙂

Call for Proposals Now Open for the Sixth Annual Connected Learning Summit

Call for proposals now open for the sixth annual Connected Learning Summit, which will run entirely online from October 26 to 28, 2023. Submissions open: March 6, 2023 Deadline for submissions: June 23, 2023 Notifications: August 2023 We invite submissions of Research Papers and Showcases that focus on digital technologies for learning, educational and commercial... Read more »

The post Call for Proposals Now Open for the Sixth Annual Connected Learning Summit appeared first on Connected Learning Alliance.

ChatGPT Wrote This Article and then Totally Stole My Job!

As I outlined recently in my “e-Literate’s Changing Themes for Changing Times” post, I am shifting my coverage somewhat. I’ll be developing and calling out tags I use for these themes so that you can go to an archive page on each one. This one will be listed under the “AI/ML,” “third-wave EdTech,” and “future of work” tags.

I’ve been fascinated by the rapid progression of ChatGPT article fads:

  1. Look at this weird thing that writes stuff!
  2. I asked ChatGPT a question—and here’s what it answered!!
  3. I asked ChatGPT to write this article—and it totally did!!!
  4. Students could use ChatGPT to write essays!!!! End of the world or totally awesome?????
  5. I asked ChatGPT for suggestions about preventing students from using ChatGPT to cheat—and it gave me five great suggestions (and five terrible ones)!!!!!!

Waitaminute. Let’s back up.

Students finding easy ways to cheat is not exactly a new thing. Remember, “to Chegg” is a verb now. Let’s back up to fad #3. Writers are asking ChatGPT to write their articles, publishing those articles, and then advertising that the articles published under their by-line were written in 30 seconds by a machine.

Do they want to get replaced by an algorithm?

It seems to me we’re thinking about the problem that these algorithms present in the wrong way.

At the moment, ChatGPT is a toy

Language-generating algorithms like ChatGPT and their image-generating cousins are toys in both good and bad ways. In a good way, they invite people to play. Anyone, whether or not they know anything about programming, can explore the possibilities and limitations of a certain kind of cutting-edge AI. It’s fun. And, like all good games, you learn by playing. You can learn what they can and can’t do well. How to write a good prompt. And so on. They make AI more accessible and, in a certain way, more understandable.

They’re toys in the bad sense because they weren’t designed for any particular purpose and tend to do weird things for any given task. For example, I tried to get DALL-E 2—ChatGPT’s image-generating sibling—to create a retro comic book cover depicting ChatGPT as a scary vampire. I tried half a dozen different prompts. The best result I got was with “An evil vampire on the cover of a retro comic book called ‘ChatGPT'”. Here’s the best image it came up with:

“An evil vampire on the cover of a retro comic book called ‘ChatGPT'” as interpreted by DALL-E 2

The algorithm does evil comic-book vampires pretty well. Spelling a word that was explicitly spelled for it in the prompt? Not so much.

Remember, large language model algorithms like the ones behind DALL-E 2 and ChatGPT don’t understand anything. They take a bag of words and put together pieces found on the internet to construct something that seems like an appropriate response.

ChatGPT, for example, is notoriously bad at math. Because it doesn’t calculate. That’s not what it is designed to do. Its developers have been working on improving this limitation. So I asked it about how good at math it is:

Me: How good are you at math?

ChatGPT: I am able to perform basic arithmetic operations, such as addition, subtraction, multiplication and division. My training data includes mathematical concepts and I have been trained to recognize and respond to arithmetic questions. However, it is important to note that I may make mistakes in complex or unconventional mathematical computations.

Me What is 2794 + 83948?

ChatGPT: 87,742

That answer is incorrect.
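A one-line check (in Python, purely to illustrate) shows just how far off that answer is:

```python
correct = 2794 + 83948
chatgpt_said = 87742  # the answer quoted above

print(correct)                 # 86742
print(chatgpt_said - correct)  # off by exactly 1000
```

A pocket calculator, a spreadsheet, or any programming language gets this right every time; a language model predicting plausible-looking text does not.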

So ChatGPT, the miracle of modern technology destined to destroy all education, is not as “smart” as a calculator. Note that you can ask Google or your Apple search prompt the same question and reliably get a correct answer. OpenAI (the makers of ChatGPT) are likely trying to combine much simpler but mathematically reliable algorithms with the GPT-3 algorithm that underlies ChatGPT and DALL-E 2. The fact that they haven’t solved this problem yet suggests it is non-trivial. These different types of algorithms can be challenging to mesh. Many of the more impressive applications of artificial intelligence and machine learning are made possible by an ensemble of different algorithms working together, some of which fit together more naturally than others.

Don’t get me wrong; the math problem will be solved. The OpenAI engineers will crack it just as they are now pretty good at preventing ChatGPT from making stock predictions or answering questions about current events when its data set hasn’t been updated since 2021. But it’s a game of whack-a-mole. Because you can ask ChatGPT anything, people do ask it anything. The creators are learning a lot about the questions people ask and what can go wrong with the answers. This new knowledge will help them design more specific solutions. But a general-purpose prompt tool like ChatGPT will be hard to make good at solving any one particular problem.

I’m not convinced that ChatGPT, as it exists today, represents a big leap forward in essay cheating. It has length limitations, has to be fact-checked, can’t produce references, and spits out highly variable quality of reasoning and argumentation. Students would learn more by trying to fix the problems with a ChatGPT-generated draft than they would by going to a traditional essay mill.

Short answer questions are a different matter. ChatGPT is already dangerous in this area. But again, students can already “Chegg” those.

Yes, but…

Could somebody write a better program specifically for writing school essays? Or magazine articles? Yes. That work is already underway.

So what do we do about the essay cheating problem? Let’s start with the two most common answers. We can develop algorithms that detect prose that was written by other algorithms. That too is already underway. So we’ll have yet another flavor of the cheating/anti-cheating arms race that benefits nobody except the arms dealers. The anti-cheating tools may be necessary as one element of a holistic strategy, but they are not the ultimate answer.

Second, we can develop essay-writing prompts and processes that are hard for the algorithms to respond to. This would be useful, partly because it would be good for educators to rethink their stale old assignments and teaching practices anyway. But it’s a lot of often uncompensated work for which the educators have not been trained. And it ends up being another arms race because the algorithms will keep changing.

We miss the point if we respond to language-generating AI as a static threat that might become more sophisticated over time but won’t fundamentally change. ChatGPT is just a friendly way for us to develop intuitions about how one family of these algorithms works at the moment. You’re wrong if you think it is a one-time shock to the system. We’re just at the beginning. The pace of AI progress is accelerating. It is not just going to get incrementally better. It is going to radically change in capabilities at a rapid pace. It will continue to have limitations, but they will be different limitations.

So what do we do?

How about talking to the students?

When adaptive learning hit peak hype, a glib response to teacher hysteria started making the rounds: “If you [teachers] can be replaced by a computer, then you probably should be.”

Doesn’t that apply…um…generally?

If all students learn is how to use ChatGPT to write their essays, why wouldn’t their hypothetical future employer use ChatGPT instead of hiring them? Why would students spend $30K, $40K, $50K, or more a year to practice demonstrating that a free-to-use piece of software does their best work for them? Students need to learn the work these tools can do so they can also understand the work the tools can’t do. Because that is the work the students could get paid for. Technology will make some jobs obsolete, leave others untouched, change some, and create new ones. These categories will continue to evolve for the foreseeable future.

At a time when students are more conscious than ever about the price-to-value of a college education, they ought to be open to the argument that they will only make a decent living at jobs they can do better than the machine. So they should learn those skills. Why learn to write better? So you can learn to think more creatively and communicate that creativity precisely. Those are skills where the primates still have the advantage.

Once we engage students openly and honestly on that point, we will start building a social contract that will discourage cheating and establish the foundational understanding we need for rethinking the curriculum—not just to keep from falling too far behind the tech but to help students get out in front of it. The current limitations of these AI toys demonstrate both the dangers and the potential. Suppose you want to apply the technology to any particular domain. In that case, whether it’s math, writing advertising copy, or something else, you need to understand how the software works and how the human expertise and social or business processes work. Whole echelons of new careers will be created to solve these problems. We will need thinkers who can communicate. Learning how to formulate one’s own thoughts in writing is an excellent way to learn both skills.

Fighting the tech won’t solve the problem or even prevent it from getting worse. Neither will ignoring it. We have to engage with it. And by “we,” I include the students. After all, it’s their futures at risk here.

(Disclaimer: This blog post was written by ChatGPT.)

(I’m kidding, of course.)

(I am able to perform basic humor operations, such as generating dirty limericks and “your momma is so ugly” jokes. My training data includes humorous concepts, and I have been trained to recognize and respond to knock-knock questions. However, it is important to note that I may make mistakes in complex or unconventional humor.)

The post ChatGPT Wrote This Article and then Totally Stole My Job! appeared first on e-Literate.

This Mastodon Will Explode, Session 1

Yesterday Maren Deepwell, Kerry Pinny, Taylor Jadin and I ran the first of three sessions introducing Mission Mastodon. “What is that?” you wisely ask. Well, it’s a collaboration between ALT and Reclaim Edtech to provide a temporary server for folks interested in exploring the federated social networking platform Mastodon. It’s a sandbox space to experiment with signing up, finding people you follow elsewhere, wrapping your head around content search, as well as re-programming ourselves to think beyond the large, monolithic sites that have come to dominate online social spaces.

Of course the attention economy is threatened by the Fediverse

Jon Udell’s recent post “Of course the attention economy is threatened by the Fediverse” discusses how a federated tool like Mastodon subverts the underlying attention economy driving the influencer culture most of the corporate social media giants have embraced. What if some of us aspire to something other than broadcast celebrity? What if we actually want to be social in the most generative sense of that word? To quote Udell:

I just want to hang out online with people whose words and pictures and ideas intrigue and inspire and delight me, and who might feel similarly about my words and pictures and ideas.

Hope springs eternal in this digital optimist, and those netizens like Udell who have worked hard for several decades to underline the myriad issues with networked culture while remaining focused on trying to build and promote a better, healthier web are my heroes. And I can even follow and talk to them on these miraculous networks. To misquote the Bard: “Sign me up for this brave new web, that has such people in it!”

But I digress; I just wanted to link to our first session, and thanks to those who showed up in the chat and made it that much better. It was a lot of fun to talk about how Mastodon feels different (in many of the ways quoted above), but also to chat and share useful tips and tricks that might come in useful as you’re getting started.

You can watch the video here, as well as the future episodes in February and March. Also, be sure to check out the chat happening on Reclaim’s Discord, which you can access from the linked watch page. I highly recommend the conversation given there are some great resources shared by folks, including this list of annotated Mastodon resources Chris Aldrich has been collecting. Finally, if nothing else, watch the video above for Kerry Pinny’s 55 second intro that is absolutely brilliant! I love that my nickname is now “the Joker!”

Anyway, if you are at all intrigued by any of this, join us in this expendable Mastodon server that will explode in 90 89 days….

https://thismastodonwillexplo.de
