The terrible human toll in Gaza has many causes.
A chilling investigation by +972 Magazine highlights one of them: efficiency.
An engineer: “When a 3-year-old girl is killed in a home in Gaza, it’s because someone in the army decided it wasn’t a big deal for her to be killed.”
An AI outputs "100 targets a day", like a factory whose product is murder:
"According to the investigation, another reason for the large number of targets, and the extensive harm to civilian life in Gaza, is the widespread use of a system called “Habsora” (“The Gospel”), which is largely built on artificial intelligence and can “generate” targets almost automatically at a rate that far exceeds what was previously possible. This AI system, as described by a former intelligence officer, essentially facilitates a “mass assassination factory.”"
"The third is “power targets,” which includes high-rises and residential towers in the heart of cities, and public buildings such as universities, banks, and government offices."
#AI #Habsora estimates in advance the number of innocents who will be killed for each "generated" bombing target:
"Five different sources confirmed that the number of civilians who may be killed in attacks on private residences is known in advance to Israeli intelligence, and appears clearly in the target file under the category of “collateral damage.”
According to these sources, there are degrees of collateral damage, according to which the army determines whether it is possible to attack a target inside a private residence. “When the general directive becomes ‘Collateral Damage 5,’ that means we are authorized to strike all targets that will kill five or less civilians — we can act on all target files that are five or less,” said one of the sources."
Sometimes I'm called upon to teach a writing-intensive capstone class where the main assignment has been a review paper. Given #GenerativeAI, I've been wondering what to do differently. Helping students improve their writing is totally different now... that's all I know.
I found this article:
The role of ChatGPT in scientific communication: writing better scientific review articles
@rspfau My feeling is we're simply going to have to move to different forms of assessment.
Either written assignments where real-time internet access is denied (i.e. no #generativeAI available) or, my preference, combining writing tasks with short presentations and Q&A. You can't fake understanding of a topic in public, and you'll quickly work out who has done their own writing. @academicchatter #academicchatter
@dogzilla @eilonwy @rspfau @academicchatter To be fair, I don't dislike that - teaching is not just about passing on knowledge, it's also about preparing students for the next steps in life.
Whether we like it or not, #generativeAI will be with us going forward, so we might as well teach how to extract its benefits - e.g. examples of how to build and use #ChatGPT bots, etc. #academicchatter #ai
My fly-on-the-wall observation on AI and higher education (at least in teaching and learning) is that most universities really do not know what to do. Many do not have integrated policies. Most professors don't really know what it is about.
Profs need training, both procedural and pedagogical. Universities should be spearheading the response. However, that is not what I see.
Yesterday I discovered "AI Horde" @aihorde, a crowdsourced, distributed cluster of image-generation workers. You can try AI image generation for free, no strings attached, and even contribute your own resources!
This article on the connections that can be made between generative AI and Jewish exegesis published @TheConversationUS blew my mind this morning. 😇 💩 !
AI is a problem for editors and authors – and it's serious.
There is a dark side to this technology, with major long-term consequences for authorship and editorial work that we're only just beginning to discover – not least copyright theft.
As an editor, I'm supporting authors against AI scraping of their work without consent.
I'm publishing a no-holds-barred blog post next week on why AI is a serious problem for editors, and why authors have every right to be concerned about AI use in publishing.
Do look out for it – will post a link here on Monday 🔗
Google Bard: "Ultimately, it is too early to say who will win the World Cup. The Matildas have a good chance, but they will need to play their best football if they want to lift the trophy."
Me: Which is heavier: a feather or a mountain?
Google Bard: "A feather and a mountain have the same mass. One kilogram of feathers has the same mass as one kilogram of rocks. However, the feathers will have a much larger volume than the rocks. This is because feathers are made of air pockets, while rocks are solid. So, if you were to pick up a feather and a rock of the same mass, the feather would feel lighter because it has more air in it."
Me: Is Sam Mitchell a good coach?
Google Bard: "Sam Mitchell is a relatively new coach in the AFL, having only taken over the Hawthorn Hawks in 2022. However, he has already made a positive impact on the club, leading them to a finals berth in his first season ... In his first season as coach, Mitchell led the Hawks to a 13-9 record, which was good enough for seventh place on the ladder. They then went on to beat the Brisbane Lions in the elimination final before losing to the Sydney Swans in the semi-final."
So there you have it. According to Google Bard, the Matildas are still a chance.
Also, apparently a mountain and a feather have the same mass.
Disappointed I missed that Hawthorn – Sydney Elimination Final that Sam Mitchell coached in his first season. Sounds like a cracker of a game.
The second Alternate Futures podcast episode is now live. I chat with Edwin Rydberg on the use of generative AI in the creative fields, and the importance of the poetic mind. We also discuss generational acceptance of AI, generative AI as a window into the human mind, and what should be considered in a test for self-awareness. podcasts.apple.com/gb/podcast/... #scifi #novel @bookstadon #alternatefutures #ai #generativeAI
"Described as #hallucination, confabulation or just plain making things up, it’s now a problem for [anyone using] #generativeAI system[s]
“I don’t think that there’s any model today that doesn’t suffer from some hallucination,” said Daniela Amodei, co-founder and president of #Anthropic #AI
#Tech experts are starting to doubt that #ChatGPT and A.I. 'hallucinations' will ever go away: 'This isn’t fixable'"