Apple Intelligence, Apple Intelligence, Apple Intelligzzzzzzzzzzzzzzzz

Chris Law

Ars Scholae Palatinae
1,072
Did anyone else get slightly irked at the endless references and constant top billing Apple Intelligence got in all 3 product announcements? I understand pushing it marketing-wise, but for one thing, it's like whiplash after they were quite sober on AI and machine learning and then it's apparently the most important thing Macs will be used for?

Now, maybe I'm missing something, trapped as I am on the rapidly crumbling ex-country known as Britain (meaning I only have access to the non-generative ApIn features as of now), but leaving my bitterness over that aside, and excepting the innovation in the backend (Private Cloud Compute and all that), I have to say the Apple Intelligence feature set ... well, it's underwhelming. There's not a single thing that hasn't been offered already, except I guess Genmoji. There's also the question of the ethical quagmire that LLMs and generative features grew out of. Has Apple commented on this, on whether they addressed the problematic aspects of the datasets used for training their models? I missed it if they did, and frankly I'm skeptical if they even could at this point - but if anyone has the resources to actually do this stuff (more) ethically, Apple does.

Rather than all this breathless promising followed by staggered delivery, I would have respected them far more if they had stuck to what seemed like their original strategy of waiting until the AI hypesteria died down - until maybe they could bring a single fresh idea to this frontier besides Genmoji and a more secure way of handling user PI, which is you know, cool but I would have hoped that was the bare minimum for our newly "intelligent" Apple. At least Siri will maybe come close to delivering the experience we were sold on back in 2011, I guess?

Maybe it's just me, but did anyone wait until they had access to Apple Intelligence to try out language or generative AI features? I can't see it. I'm not saying the integration won't be useful, I'm just saying it doesn't warrant top billing in the value proposition of Macs and iOS devices.

I'd be really interested to hear what you guys are thinking post the "exciting week for the Mac" (I loved the hardware that was announced on the whole)
 

jaberg

Ars Praefectus
4,114
Subscriptor
Maybe it's just me, but did anyone wait until they had access to Apple Intelligence to try out language or generative AI features? I can't see it. I'm not saying the integration won't be useful, I'm just saying it doesn't warrant top billing in the value proposition of Macs and iOS devices.
Yes, I waited. I avoided jumping on “ChatGPT” and the rest of the AI bandwagon. Except for 15 minutes of tooling around with Adobe’s features at a user group demo. Oh, and I did use a free online portal once. To generate a few “sketch art Leicas” for a t-shirt design I was playing with. The results in both cases were credible, but I’ve been largely free of AI in my process.

So far I’m enjoying the few things Apple has brought to the table. The object removal in photos is solid — replacing an annoying lens flare with a patch nicely integrated into the bed of leaves in the foreground. I’m not interested in adding anything to my photos; that’s a process that crosses my personal line between photography and photo illustration. (Though I might cheat and extend the occasional background. As an “artist”, I live by guidelines more than rules.)

I’m off the beta cycle, so the Siri improvements are yet to be available. I imagine that will be an iterative process.

I’ve started to put the writing tools to work. Mostly for proofreading, but I did use them to “spice up” a short reply on another forum. I’m not guaranteeing I’ll make a practice of it but I allowed the change. Early days, but impressive enough that I already know I’m going to be annoyed that I don’t have the tools available while writing on my iPads. (As I am now.) This might be a feature that temporarily drives me Back to the Mac more often.

So no, I’m not irked. I recognize this as the start of a journey. Iteration and missteps to follow. Apple’s somewhat conservative attempt to bring convenient, safe, useful AI to the somewhat skeptical muggles. (And wizards like me. Too old, jaded, and busy to jump on every tech bandwagon.)
 
Last edited:
  • Like
Reactions: Tagbert

cateye

Ars Legatus Legionis
12,221
Moderator
I don't fault them for going overboard, because it's not a choice they're making, it's a choice being enforced upon them by the marketplace. Ignoring AI risks, rightly or wrongly, being labeled as irrelevant. The damage to Apple's brand wouldn't be mortal, but it wouldn't be minor, either. Ask Microsoft how missing the mobile device revolution went for them.

Do I personally find the focused, professional applications of AI (e.g. what Adobe is doing, which I use near daily) a lot more interesting and useful than what most other companies, including Apple, have showcased so far? Yes. But I understand that circumstances present a real challenge for Apple—having to iterate ideas in public, including ones that will go nowhere, rather than slowly evolving them in a skunkworks until they have that Jobsian, biblical sheen as I'm sure they would prefer.

Maybe someday all this will result in a version of Siri that isn't completely braindead. That alone will have made the shotgun approach we're suffering through now worth it.
 

wrylachlan

Ars Legatus Legionis
13,684
Subscriptor
Maybe it's just me, but did anyone wait until they had access to Apple Intelligence to try out language or generative AI features? I can't see it. I'm not saying the integration won't be useful, I'm just saying it doesn't warrant top billing in the value proposition of Macs and iOS devices.

I'd be really interested to hear what you guys are thinking post the "exciting week for the Mac" (I loved the hardware that was announced on the whole)
Within 10 years LLMs will dominate the UX of all devices. Apple Intelligence is nothing more or less than Apple saying “we recognize that this is the future and we’re working towards it.”

I think one of the things that’s getting lost in the discussion of this “staggered rollout” is that Apple has been moving in this direction for years. 5 years ago all features dropped in iOS X.0 and point releases essentially fixed bugs and maybe tuned performance a bit. But for the last few years Apple has been moving towards more staggered feature releases. This isn’t a new phenomenon with the Apple Intelligence features.

Lastly, Apple Intelligence isn’t just the features. It’s the models and the fact that those models are being baked into the OS and made available to third parties. That plumbing piece is the real heavy lift here and I would expect that we would see an acceleration of features that use these models next year. Take the new image recognition engine that powers Photos new search. That’s a system service now which means any app that wants to understand images has access to a dramatically more powerful model than before that’s built right into the OS, is highly performant, and won’t kill your battery. On some level the features we’re getting are just the tip of the iceberg.
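To make that concrete, here’s a rough sketch of what “any app can ask the OS to understand an image” already looks like through the Vision framework. This is the existing VNClassifyImageRequest API, not the new Apple Intelligence models, and the helper name and confidence cutoff are just mine:

import Vision

// Rough sketch: ask the system's built-in image classifier what is in a photo.
// Assumption: the newer Apple Intelligence models will surface behind similar
// system services; this is only today's Vision API.
func classify(imageAt url: URL) throws -> [(label: String, confidence: Float)] {
    let request = VNClassifyImageRequest()
    let handler = VNImageRequestHandler(url: url, options: [:])
    try handler.perform([request])
    let observations = request.results ?? []
    // Keep only reasonably confident labels ("cat", "beach", "document", ...).
    return observations
        .filter { $0.confidence > 0.3 }
        .map { ($0.identifier, $0.confidence) }
}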
 

jaberg

Ars Praefectus
4,114
Subscriptor
Word on the street* is that Apple Intelligence only narrowly beat out “1.3X faster at random Excel tasks” as the headline feature. That was certainly going to send my mother off in search of her checkbook.

You know what would sell her a new Mac… (If her current M1 Air wasn’t capable.) Show me the picture of [granddaughter] in last week’s ice show and send it to [sister]. I recognize the privilege, but Mom would pay fully-loaded MacBook Pro/XDR display money — in a heartbeat — if she didn’t have to understand the computer. Not requiring help doing tasks that are trivial to us here. I won’t be surprised if her otherwise perfectly adequate iPad, and the iPhone SE, get replaced before mine — once a process made simple by AI is unavailable to her on those devices. Unless it’s security based, I don’t push, I facilitate.

Outside of our circle here, performance boosts and 16GB baseline RAM (whoopie) are bullet points. Not headlines.


*I made that up. Nobody talks to me anymore.

I’ve now edited this post multiple times for grammar and punctuation. Where’s my checkbook.
 
Last edited:

OrangeCream

Ars Legatus Legionis
56,641
Did anyone else get slightly irked at the endless references and constant top billing Apple Intelligence got in all 3 product announcements? I understand pushing it marketing-wise, but for one thing, it's like whiplash after they were quite sober on AI and machine learning and then it's apparently the most important thing Macs will be used for?
I think you should separate the hyped-up genAI everyone else is focused on from the ML-based AI Apple has been pursuing since the advent of the Apple Neural Engine, and keep an eye on the latter.

I actually do think the AI stuff is going to be huge. Right now the state of the industry can be compared to the beginning of the GPU market 25 years ago. It was an optional accelerator for games, mostly. Today it’s become core to many tasks thanks to how visually heavy computers have become, alongside games, video and audio encoding, and now AI. The ANE is a more efficient means to do AI, in the same way a GPU was more efficient than a CPU at performing certain math-heavy operations.
Now, maybe I'm missing something, trapped as I am on the rapidly crumbling ex-country known as Britain (meaning I only have access to the non-generative ApIn features as of now), but leaving my bitterness over that aside, and excepting the innovation in the backend (Private Cloud Compute and all that), I have to say the Apple Intelligence feature set ... well, it's underwhelming.
It’s underwhelming because it’s iterative. It’s not something that will disrupt the market. It’s more of what Apple has been doing for a long time now, except now everyone else is going to do it too.
There's not a single thing that hasn't been offered already, except I guess Genmoji.

That isn’t true. The ability to use an AI model to organize unstructured text into structured information makes it possible to process multiple sources into a reliably (if imperfectly) organized form.

Take sentiment analysis. It’s an old AI problem where you read a review and try to determine if it’s a good review or a bad review. To tackle the problem you use a training set where reviewers have used a thumbs up or down mechanism, meaning the model can know without parsing the text whether it is good or bad.

You can then use different AI models to extract features from the text. Are certain words more common? Are groupings of words more common? Are the types of words used associated with good reviews? Are these words used in both good and bad reviews? You might even have multiple AI models working on the text. You might transform a review into a multidimensional point in review space, then flatten it into three dimensions: one being favorability, another being enthusiasm, and another being reviewer quality.
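If you want to poke at the “review as a point in space” idea on-device, the NaturalLanguage framework already exposes a sentence embedding. A minimal sketch, with the caveat that the three labeled axes above are just my illustration; a real embedding’s dimensions aren’t labeled like that, and the function name here is made up:

import NaturalLanguage

// Sketch: map a review into a multidimensional "review space" using the
// built-in English sentence embedding (an existing NaturalLanguage API,
// not an Apple Intelligence one).
func reviewVector(for review: String) -> [Double]? {
    guard let embedding = NLEmbedding.sentenceEmbedding(for: .english) else { return nil }
    // One point with embedding.dimension coordinates; axes like "favorability"
    // would have to be learned or projected out separately.
    return embedding.vector(for: review)
}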

That model (or models) can then be applied to any text. You might apply an LLM to transform a Twitter post into a space in common with the sentiment analysis space and then feed it in. You might get back the specific phrases that indicate the sentiment, what that sentiment is, how strongly the person holds it, and what type of person the poster is.

You might also apply that model to an email; now you might be able to break paragraphs down into salient points, how important each point is, and how important the person making the point is. That summary can then be sorted by weighted importance in a daily summary, during a focused notification, or whenever you check your email.
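And a toy version of the email case. The per-paragraph sentiment comes from the on-device NLTagger; the senderWeight and the “weighted importance” formula are entirely made up, just to show the sorting step:

import NaturalLanguage

struct SummaryPoint {
    let text: String
    let sentiment: Double     // -1.0 (negative) ... +1.0 (positive), from NLTagger
    let senderWeight: Double  // hypothetical: how much you care about the sender
}

// Score each paragraph of an email and order them for a daily summary.
func summaryPoints(paragraphs: [String], senderWeight: Double) -> [SummaryPoint] {
    let tagger = NLTagger(tagSchemes: [.sentimentScore])
    let points = paragraphs.map { paragraph -> SummaryPoint in
        tagger.string = paragraph
        let (tag, _) = tagger.tag(at: paragraph.startIndex,
                                  unit: .paragraph,
                                  scheme: .sentimentScore)
        let score = Double(tag?.rawValue ?? "0") ?? 0
        return SummaryPoint(text: paragraph, sentiment: score, senderWeight: senderWeight)
    }
    // "Weighted importance" here is just |sentiment| * senderWeight, a stand-in
    // for whatever Apple's models actually compute.
    return points.sorted { abs($0.sentiment) * $0.senderWeight > abs($1.sentiment) * $1.senderWeight }
}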

There's also the question of the ethical quagmire that LLMs and generative features grew out of. Has Apple commented on this, on whether they addressed the problematic aspects of the datasets used for training their models?
LLMs are only one part of the whole, and at this point we don’t know how important that part is. We already have a public statement regarding their LLM use:
https://huggingface.co/apple/OpenELM
We pretrained OpenELM models using the CoreNet library. We release both pretrained and instruction tuned models with 270M, 450M, 1.1B and 3B parameters.


I missed it if they did, and frankly I'm skeptical if they even could at this point - but if anyone has the resources to actually do this stuff (more) ethically, Apple does.
They also provided the training data:
Our pre-training dataset contains RefinedWeb, deduplicated PILE, a subset of RedPajama, and a subset of Dolma v1.6, totaling approximately 1.8 trillion tokens. Please check license agreements and terms of these datasets before using them.

pip install datasets@git+https://github.com/huggingface/datasets.git@66d6242

Rather than all this breathless promising followed by staggered delivery, I would have respected them far more if they had stuck to what seemed like their original strategy of waiting until the AI hypesteria died down - until maybe they could bring a single fresh idea to this frontier besides Genmoji and a more secure way of handling user PI, which is you know, cool but I would have hoped that was the bare minimum for our newly "intelligent" Apple. At least Siri will maybe come close to delivering the experience we were sold on back in 2011, I guess?
The ability to sort, organize, prioritize, and evaluate free-form text has been part of Apple’s Data Detectors strategy for years:
https://developer.apple.com/documentation/datadetection
Data detection methods in other frameworks detect common types of data represented in text, and return DataDetection framework classes that provide semantic meaning for matches. Get relevant, domain-specific information from matches of the following types:

  • Calendar events
  • Email addresses
  • Flight numbers
  • Web links
  • Amounts of money, with currencies
  • Phone numbers
  • Postal addresses
  • Shipment tracking numbers
Use functions, such as those in UIPasteboard, to detect the types of data that you specify in a particular context.

Their AI frameworks will just build on top of that, without being privacy invasive.
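For anyone who hasn’t used it from code, this is roughly what that looks like today with Foundation’s older NSDataDetector (the newer DataDetection framework in the link above works along the same lines); the sample text is made up:

import Foundation

let text = "Lunch Tuesday at 12:30 at 1 Infinite Loop, Cupertino. RSVP to me@example.com."

// Ask the system to find dates, addresses, and links in free-form text.
let types: NSTextCheckingResult.CheckingType = [.date, .address, .link]
let detector = try! NSDataDetector(types: types.rawValue)
let range = NSRange(text.startIndex..., in: text)

for match in detector.matches(in: text, options: [], range: range) {
    switch match.resultType {
    case .date:
        print("date:", match.date ?? "?")
    case .address:
        print("address:", match.addressComponents ?? [:])
    case .link:
        print("link:", match.url?.absoluteString ?? "?")
    default:
        break
    }
}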
Maybe it's just me, but did anyone wait until they had access to Apple Intelligence to try out language or generative AI features? I can't see it. I'm not saying the integration won't be useful, I'm just saying it doesn't warrant top billing in the value proposition of Macs and iOS devices.
I think it’s going to make more sense if you just accept that Apple is marketing what they’ve always been doing now that there is a hype train they can take advantage of.
I'd be really interested to hear what you guys are thinking post the "exciting week for the Mac" (I loved the hardware that was announced on the whole)
Their existing ML features include photo library search, text extraction from photos, subject detection in camera mode, text transcription from video and audio, autocomplete and autocorrect, subject extraction from photos, depth and focus analysis of photos, face detection, and face recognition.

But they don’t market it as such. Instead you only know about Smart Albums, Live Text, Photo Enhancement, Live Transcription, Voicemail Transcription, Autocorrect, Siri text input, Stickers, Portrait Mode, Face ID, and Cinematic Video.

That’s what I think will continue to happen here, with their Apple Intelligence. They will bake these models into the OS and just layer the advancements into your favorite apps.
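Live Text is a good example of that pattern: any app can call what is presumably much the same text-from-pixels machinery through Vision’s VNRecognizeTextRequest. A minimal sketch (the helper name is mine):

import Vision

// Sketch: the OS-level text recognizer that sits behind Live Text-style features.
func recognizedText(in imageURL: URL) throws -> [String] {
    let request = VNRecognizeTextRequest()
    request.recognitionLevel = .accurate
    let handler = VNImageRequestHandler(url: imageURL, options: [:])
    try handler.perform([request])
    let observations = request.results ?? []
    // Take the top candidate string for each detected text region.
    return observations.compactMap { $0.topCandidates(1).first?.string }
}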
 

GaitherBill

Ars Tribunus Militum
2,254
Subscriptor
I have the Apple Smarts enabled on my 16 Pro Max.

I have no idea what it does for me. I should probably read up on it.

Edit: Oh! this is going to make it so much easier to delete all the pics of my ex wife! I was dreading going through 15 years of photos manually. I should learn how to use all the Apple Tech I own.
 

OrangeCream

Ars Legatus Legionis
56,641
Within 10 years LLMs will dominate the UX of all devices. Apple Intelligence is nothing more or less than Apple saying “we recognize that this is the future and we’re working towards it.”
I’m not sure that is true. I agree that some AI model will be in use to capture user intent and then assist the user; that’s how they use it now when you take a photo, or video, after all. That the model is an LLM isn’t obviously true because it’s also possible something better and different appears in three years.
I think one of the things that’s getting lost in the discussion of this “staggered rollout” is that Apple has been moving in this direction for years. 5 years ago all features dropped in iOS X.0 and point releases essentially fixed bugs and maybe tuned performance a bit. But for the last few years Apple has been moving towards more staggered feature releases. This isn’t a new phenomenon with the Apple Intelligence features.
Agreed.
Lastly, Apple Intelligence isn’t just the features. It’s the models and the fact that those models are being baked into the OS and made available to third parties. That plumbing piece is the real heavy lift here and I would expect that we would see an acceleration of features that use these models next year. Take the new image recognition engine that powers Photos new search. That’s a system service now which means any app that wants to understand images has access to a dramatically more powerful model than before that’s built right into the OS, is highly performant, and won’t kill your battery. On some level the features we’re getting are just the tip of the iceberg.
Yes, and I’m excited to see how they evolve Data Detectors using these models. Right now they only manage text, but you should be able to treat photos and videos the same way too. Extracting subject content, sentiment, transcripts, and embedded text and then organizing it into a usable format means you can create calendar entries from voicemails, flight itineraries from a webpage or a screenshot, or a recipe from a video of someone making a cake.
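You can already rough out that voicemail-to-calendar idea with today’s frameworks, which is why I think the interesting part is Apple wiring it together invisibly rather than the raw capability. A sketch that assumes speech and calendar permissions are already granted; none of this is an Apple Intelligence API, and the function name is made up:

import Foundation
import Speech
import EventKit

// Sketch: transcribe a voicemail, find a date in the transcript, draft an event.
func draftEvent(fromVoicemailAt url: URL, into store: EKEventStore) {
    guard let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en_US")) else { return }
    let request = SFSpeechURLRecognitionRequest(url: url)

    recognizer.recognitionTask(with: request) { result, _ in
        guard let result = result, result.isFinal else { return }
        let transcript = result.bestTranscription.formattedString

        // Reuse the existing data detectors to pull a date out of the transcript.
        guard let detector = try? NSDataDetector(types: NSTextCheckingResult.CheckingType.date.rawValue),
              let match = detector.firstMatch(in: transcript, options: [],
                                              range: NSRange(transcript.startIndex..., in: transcript)),
              let date = match.date else { return }

        // Create a calendar entry from what was heard.
        let event = EKEvent(eventStore: store)
        event.title = "Voicemail: \(transcript.prefix(40))"
        event.startDate = date
        event.endDate = date.addingTimeInterval(30 * 60)
        event.calendar = store.defaultCalendarForNewEvents
        try? store.save(event, span: .thisEvent)
    }
}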
 

wrylachlan

Ars Legatus Legionis
13,684
Subscriptor
I’m not sure that is true. I agree that some AI model will be in use to capture user intent and then assist the user; that’s how they use it now when you take a photo, or video, after all. That the model is an LLM isn’t obviously true because it’s also possible something better and different appears in three years.
Will a better model appear in the next few years? Almost certainly. But I’ll be surprised if in 10 years whatever model we use isn’t at least recognizable as an evolutionary descendant of LLMs.
 

OrangeCream

Ars Legatus Legionis
56,641
Will a better model appear in the next few years? Almost certainly. But I’ll be surprised if in 10 years whatever model we use isn’t at least recognizable as an evolutionary descendant of LLMs.
Like I said, it’s also possible that it isn’t like an LLM. The current system seems hopelessly inefficient to me.

I do accept the possibility that an LLM or a derivation of one becomes a standard part of the user space, though.
 

jaberg

Ars Praefectus
4,114
Subscriptor
Edit: Oh! this is going to make it so much easier to delete all the pics of my ex wife! I was dreading going through 15 years of photos manually. I should learn how to use all the Apple Tech I own.
Not to rain on your parade, but Photos has offered face recognition for a bunch of years now. The AI (at least theoretically) adds the ability to delete “photos of my ex wearing a red dress”.
 
  • Like
Reactions: grahamb

VirtualWolf

Ars Legatus Legionis
10,904
Subscriptor++
I’ve started to put the writing tools to work. Mostly for proofreading, but I did use them to “spice up” a short reply on another forum.
Does it output that exact same completely bland, soulless, and overly-verbose style that ChatGPT and the like turn everything into?

Part of my job at work is keeping the documentation up to date and polished for our internal software platform. The other day one of the developers wrote up a document for a new feature that's coming out and asked me to review it. I read it over and it had that absolutely unmistakable style of "This has been through an LLM". I asked if he'd done that, and sure enough he had.

(To be clear, he didn't get the LLM to write the documentation out of whole cloth; he wrote it and then ran it through some sort of "make this more professional" pass, or whatever process it is that LLMs do.)
 

ShuggyCoUk

Ars Legatus Legionis
10,173
Subscriptor++
For those who like the existing photo search capabilities (me included), the photos semantic search gets extended to videos, with deep links to the point in the video where the item/person/whatever occurs.

Disclaimer: I’m non-trivially involved in all this, so I might have to just ignore some things, but public betas are public and I will talk about them.
 

jaberg

Ars Praefectus
4,114
Subscriptor
Does it output that exact same completely bland, soulless, and overly-verbose style that ChatGPT and the like turn everything into?
A valid question. One I'm not sure I have a good answer for yet. As I said above, my use for the feature has primarily been proofreading. (And I don't always have it available — I'm still doing most of my work on my iPads. My AI interventions so far have been to liven up brief clips and forum responses. In part to see if anyone would notice.

Moving over to the Mac so I can invoke Writing Tools…

Proofreading picked up the fact that I'd forgotten to close the parentheses in the paragraph above. No, the mistake wasn't deliberate, but it's typical of what happens when I hit POST immediately.

A valid question. One I'm not sure I have a good answer for yet. As I said above, my use for the feature has primarily been proofreading. (And I don't always have it available — I'm still doing most of my work on my iPads. My AI interventions so far have been to liven up brief clips and forum responses. In part to see if anyone would notice.)

Running the corrected text through the Friendly filter gets me this:

That’s a great question! I’m not sure I have a perfect answer yet. As I mentioned before, I’ve mostly used the feature for proofreading. (And I don’t always have it - I’m still mostly working on my iPads. My AI interventions so far have been to make brief clips and forum responses more interesting. I’m curious to see if anyone notices.)

In this case the edit is a little over-the-top for me. I'm not seeking a perfect answer because I know I'll never be able to provide one. I would have rejected, regenerated or edited the re-write to restore a bit of my Scandinavian joy to the tone.

Let's try professional.

A valid question that I am uncertain about providing a comprehensive answer to at this time. As previously mentioned, my primary application of the feature has been proofreading. (It is important to note that I do not always have the feature readily available, as I continue to primarily conduct my work on my iPads. My AI interventions thus far have been limited to enhancing brief clips and forum responses, primarily to gauge whether anyone would perceive a difference.)

Yep. Definitely middle management speak. Though not always successful, this is exactly what I try not to do when I write. (If an assignment was a minimum of 10,000 words, I submitted 8,000 that did the job — and was seldom penalized for it.) Reject. Or at the very least, a major re-edit before I send it along.

Concise(?):

A valid question. I’m not sure I have a good answer yet. My primary use for the feature has been proofreading. I don’t always have it available (I mostly work on my iPads). My AI interventions so far have been to make brief clips and forum responses more engaging, to see if anyone notices.

Not bad. Accurate. Removed some awkward phrasing and wordiness. The parentheses are better placed. I'd keep it.

Out of curiosity, let's run the "Mr. Bidness" version through the filter:


A valid question I’m unsure how to answer fully. As mentioned, I mainly use the feature for proofreading. (Note that I don’t always have it, as I mostly work on my iPads. My AI interventions have been limited to enhancing brief clips and forum responses to see if anyone notices a difference.)

Better than the long version. If my original was particularly troublesome to me, I might send this off to an appropriate audience. I do struggle with the right level of professionalism for business communication.

My assessment: It's a tool, not a copywriter. It can be used or misused. AI writing is a compression to the middle. If you're a fairly good writer, particularly one with a developed voice, it will push you down the curve. I would put myself towards that end of the spectrum. As with my photography, you might not always like the output, but most of the words are chosen deliberately. However, I will benefit from proofreading and light-handed editing. If you're a poor writer who struggles with word putting togetherness paragraph-wise [Dan Jenkins], or not a native speaker, the tools might improve your product. BTW, I'm citing Jenkins there, not calling him out.

I suspect I'll continue to selectively use the tools, and expect that those tools will improve over time.
 
Last edited:

Bonusround

Ars Tribunus Militum
1,827
Subscriptor
… until maybe they could bring a single fresh idea to this frontier besides Genmoji and a more secure way of handling user PI, which is you know, cool but I would have hoped that was the bare minimum for our newly "intelligent" Apple.
It’s not just cool, it’s really really damn hard. As to the ‘bare minimum’ shade, just look around: is anybody else doing it?

Maybe it's just me, but did anyone wait until they had access to Apple Intelligence to try out language or generative AI features? I can't see it.
Probably not many on this forum. But it’s a reasonable assumption that the vast majority of iPhone users‘ first interactions with generative AI will occur through Apple Intelligence.

‘A computer for the rest of us. A bicycle for the mind.’
 

cateye

Ars Legatus Legionis
12,221
Moderator
Moving over to the Mac so I can invoke Writing Tools…

Great demonstration, and it reminds me a bit of how I approach AI as a graphic designer:

Fully AI-generated images are almost universally not something I would use in a finished piece, for a client, for money. Too plastic-y with a sheen (both literal and figurative) that makes even highly realistic scenes look oddly wrong. Like Hyperrealism as a movement and method in art, it demonstrates tremendous technical skill while still lacking that difficult-to-define 0.1% that makes it appear actually real.

But extending a canvas, or removing unnecessary objects, or shifting perspective, or any one of a number of manipulations of an existing real image? Absolute fucking gold, and the time these tools have already saved me is career-changing. It's nothing I can't do manually given time, or by throwing it at a subcontractor... but why? The machine, under my guidance, does them faster, cheaper, and better.

Likewise, my journalism education makes me blanch at the idea of using AI to write or re-write an entire piece without any review. The "sheen" I spoke of extends to word choice, as demonstrated with the "Professional" writing example. But as a way to evaluate your writing, or catch quantifiable grammar/punctuation mistakes, or to A-B test to make sure you're being concise and staying on-message? I don't see how this won't become SOP over time.
 

Bonusround

Ars Tribunus Militum
1,827
Subscriptor
Like Hyperrealism as a movement and method in art, it demonstrates tremendous technical skill while still lacking that difficult-to-define 0.1% that makes it appear actually real.
Apologies for the aside, but your WP link above led me to this clichéd chrome teapot render…

[Image: “La hora del té” by Magda Torres Gurza, oil on canvas]

(wow)
 
Last edited:

Jeff3F

Ars Tribunus Angusticlavius
7,083
Subscriptor++
I'm most interested in when it becomes myself-aware (the contextual awareness stuff, not SkyNet stuff), and then the complex asks that that permits; as someone posted above, find that picture of such and such and email it to so and so. This stuff will get better and better, and I'm glad they're ostensibly addressing it even though Siri is usually just the worst.

Siri will probably continue to be the worst, but at least it'll be private. I'd love to be wrong though.

If we get into a habit of reading and writing communiques via chatbot AI, then at some point we'll probably get to a place where we're no longer using written language in the same way. I have a calendar item for a party at a friend's house; I can ask the AI to ask their AI if I should bring anything, and I'll get an answer that might or might not have required that friend's personal attention. This is exciting.

What's not exciting is the possibility that some folks will have fewer quality human interactions in their lives.
 

cateye

Ars Legatus Legionis
12,221
Moderator

Isn't it?! Hyperrealism isn't what I gravitate toward at a museum or gallery (that'd be the photography exhibits), but it's fun to linger over and marvel at the unbelievable skill and understanding of light and color.

And yet... it immediately doesn't look real. Our brains are so skilled at spotting even the slightest deviations. It's why that "Professional" text in jaberg's example reads so peculiar, and why we look at a Monet, in contrast, and have such an emotional connection to a scene that doesn't look anything like "real" yet evokes the emotion of place.

Anyway, sidetrack indeed.
 
  • Like
Reactions: jaberg

jaberg

Ars Praefectus
4,114
Subscriptor
There is one artist I've seen, and I'm ashamed to say I can't remember his name, who has a hyperrealistic piece hanging at the Walker. Or at least he had one there within the past year or so. He produces oil paintings that are for all intents and purposes indistinguishable from photography. They have life. Beyond that, I'm with @cateye in that the genre tends to tickle my uncanny valley detector. Neat to see initially. I would never live with it.
 

wrylachlan

Ars Legatus Legionis
13,684
Subscriptor
Isn't it?! Hyperrealism isn't what I gravitate toward at a museum or gallery (that'd be the photography exhibits), but it's fun to linger over and marvel at the unbelievable skill and understanding of light and color.

And yet... it immediately doesn't look real.
I don’t think the issue is the uncanny valley. I think the issue is choice of subject. Choice of subject is 9/10 of art. All of those photorealistic art pieces tend to choose utterly banal subjects that are just designed to show off technical prowess. I mean, a fucking chrome tea set? There’s precisely nothing evocative about that subject.
 

cateye

Ars Legatus Legionis
12,221
Moderator
Choice of subject is 9/10 of art.

This sidetrack has applicability! Let's go back to jaberg's samples: What makes the "Professional" example so weird is that it reads entirely differently from its context and from the writing I expect from jaberg. Who writes on a message board like that? It isn't that Apple Intelligence is bad at re-working text to fit a certain model, it's that the real intelligence, which no AI can do for us, is knowing which to choose, and when, and how much of the reworking makes sense. That's why the wild west of AI in general right now is painful: People have been given a toy and are pushing the big button on that toy over and over and over again without regard to what happens as a result.

Presumably, we might expect Apple to try and bridge that disconnect, to offer AI tools that shine not from how widely they can be applied, but how focused that application is. The first set in 18.1 doesn't feel any more dialed in than anyone else's tools, for the most part, but I'm curious to see how Apple evolves these tools moving forward.
 

gabemaroz

Ars Tribunus Militum
1,689
I'm not sure AI will ever be able to create anything truly unique, no matter how advanced it gets, because nuance, flaws, and experience are always going to be part of what people create. Probably the only thing Apple has brought so far is brakes. They have to jump on the bandwagon because there's so much impetus and capital behind it but haltingly so.

It's called AI slop because the stuff we see now is a bland amalgamation of its components. A delicious meal run through a blender and served lukewarm. Written, visual, audio Soylent Green – it's people ... and unappealing.
 

OrangeCream

Ars Legatus Legionis
56,641
I'm not sure AI will ever be able to create anything truly unique, no matter how advanced it gets, because nuance, flaws, and experience are always going to be part of what people create.
That has nothing to do with Apple Intelligence.

Your statement is like saying keyboards will never be able to create anything unique. They aren’t meant to, they’re just there to allow you to send inputs to your computer.

Likewise AI is a tool to be used. You tell it what to do, it doesn’t do anything by itself.
Probably the only thing Apple has brought so far is brakes. They have to jump on the bandwagon because there's so much impetus and capital behind it but haltingly so.
Other than the fact that they’ve been using AI for years now, before just about anyone else. AMD and Intel didn’t add an NPU until this year, while Apple has been shipping an NPU with all new devices since the A11 in 2017. They’re not jumping on the bandwagon so much as taking advantage of free advertising.
It's called AI slop because the stuff we see now is a bland amalgamation of its components. A delicious meal run through a blender and served lukewarm. Written, visual, audio Soylent Green – it's people ... and unappealing.
Yeah, except that’s not what Apple is doing. They advertise using it to summarize and organize your email and notifications, for example.
 

Louis XVI

Ars Legatus Legionis
11,138
Subscriptor
A valid question that I am uncertain about providing a comprehensive answer to at this time. As previously mentioned, my primary application of the feature has been proofreading. (It is important to note that I do not always have the feature readily available, as I continue to primarily conduct my work on my iPads. My AI interventions thus far have been limited to enhancing brief clips and forum responses, primarily to gauge whether anyone would perceive a difference.)
Wow, that’s horrifying! As a professional, I spent years trying to beat that kind of writing out of young lawyers and teachers. Simple and direct is the most effective style of professional writing.
 

jaberg

Ars Praefectus
4,114
Subscriptor
Wow, that’s horrifying
So horrifying in fact that I wish you’d provided context for the citation. 😂 😂 I reported it. Did I actually say (or write) it?

No sweat. It’s fine. Even if this quote comes around to bite me in the 🫏 I’ve committed worse literary crimes. May the beatings continue! 😉

🫂
Your brother in clear, concise communication.
 
  • Like
Reactions: Jeff3F

Louis XVI

Ars Legatus Legionis
11,138
Subscriptor
So horrifying in fact that I wish you’d provided context for the citation. 😂 😂 I reported it. Did I actually say (or write) it?

No sweat. It’s fine. Even if this quote comes around to bite me in the 🫏 I’ve committed worse literary crimes. May the beatings continue! 😉

🫂
Your brother in clear, concise communication.
Sorry, I thought it would be clear from the context. I certainly didn’t mean to imply that you wrote that nonsense yourself!
 
  • Like
Reactions: jaberg

wrylachlan

Ars Legatus Legionis
13,684
Subscriptor
Presumably, we might expect Apple to try and bridge that disconnect, to offer AI tools that shine not from how widely they can be applied, but how focused that application is. The first set in 18.1 doesn't feel any more dialed in than anyone else's tools, for the most part, but I'm curious to see how Apple evolves these tools moving forward.
I mean they don’t even have the very first personalization pieces in place yet. Of course it’s generic now. But this is just the start of a decades-long process. I’d be shocked if we didn’t get to a place where the AI can distinguish your Ars posting style from your email-to-grandma style and give you the best possible version of your context-specific style.

It’s like the new handwriting tools in iPadOS. They’re fucking uncannily good at making handwriting that looks like my chicken scratch, just a better version of my chicken scratch. That will come to writing tone sooner rather than later.
 

cateye

Ars Legatus Legionis
12,221
Moderator
Yeah, about that: I’m one of those all-caps, printing-only hand writers and iPadOS definitely doesn’t know what to do with my writing: Convert to text is also in all caps, the handwriting “smoothing” barely works at all, changing letters to random blobs, etc. Which is fine, I get that I’m a bit of an outlier, but that’s the sort of personalization and individual context I hope they can eventually sort out.
 
  • Like
Reactions: japtor

dal20402

Ars Tribunus Angusticlavius
7,334
Subscriptor++
I feel completely unequipped to judge how the addition of AI tools to everyday computing is going to shake out, because I just don't feel like my brain and that of the average user are on the same wavelength.

I want computers to be consistent and predictable above all, not to try unpredictably to guess what I want. When I use ChatGPT (or the similar proprietary tool that my employer has put together to avoid disclosing confidential information outside the company), I'm usually after a signpost to point me the right direction in a domain I don't know, rather than either a final answer to a question or a written product to give someone else. LLMs to date have been comically bad at expressing things on my behalf and I find it hard to see how that could change.

Having said all of that, "LLMs dominating the UX of all devices" sounds to me like dystopian hell, and if Apple goes that direction I will either stick with the old stuff or find a different platform.
 

Bonusround

Ars Tribunus Militum
1,827
Subscriptor
Having said all of that, "LLMs dominating the UX of all devices" sounds to me like dystopian hell, and if Apple goes that direction I will either stick with the old stuff or find a different platform.

I feel you in that regard, and think a better way to express the notion might be “LLMs playing an underlying role in the majority of a device’s UX.”

Dominating is too strong a word and, I think, misrepresents the surfaced UX that we’re likely to see from Apple’s personal AI – in the near to medium term at least.
 

Felix K

Ars Tribunus Militum
2,383
You can then use different AI models to extract features from the text. Are certain words more common? Are groupings of words more common? Are the types of words used associated with good reviews? Are these words used in both good and bad reviews? You might even have multiple AI models working on the text. You might transform a review into a multidimensional point in review space, then flatten it into three dimensions: one being favorability, another being enthusiasm, and another being reviewer quality.

That model (or models) can then be applied to any text. You might apply an LLM to transform a Twitter post into a space in common with the sentiment analysis space and then feed it in. You might get back the specific phrases that indicate the sentiment, what that sentiment is, how strongly the person holds it, and what type of person the poster is.

You might also apply that model to an email; now you might be able to break paragraphs down into salient points, how important each point is, and how important the person making the point is. That summary can then be sorted by weighted importance in a daily summary, during a focused notification, or whenever you check your email.
No one acknowledged this but your post is a brilliant bit of text on why AI is going to revolutionize interpretation of data.

I think it’s the same kind of revolution that statistical probabilities brought to physics (matrices).
It will allow us to understand the universe in different dimensions.

Thanks for summing it up in a beautiful way. I will shamelessly steal it.
 

gabemaroz

Ars Tribunus Militum
1,689
All of that stuff has been available for years though, long before the current crop of LLMs.

There have been sentiment analysis tools in Python for at least a decade: libraries like TextBlob and VADER, and approaches like bag-of-words models and LSTMs.

I would even say the chatbot form of LLMs would be terrible for sentiment analysis because they are stochastic and prone to hallucination.
 
Last edited:

daGUY

Ars Tribunus Militum
2,915
Really weird bug I’m seeing with notification summaries: for Reminders, the summary is “stuck” displaying an old summary of previous reminders. Clicking the summary to expand it into the individual notifications shows the correct/new ones, but the summary still summarizes the old ones. And it’s been like this for weeks! All other notification summaries work properly, but the ones for Reminders specifically won’t update. I’m on Sequoia 15.1.1.
 
  • Like
Reactions: megabulk

Hap

Ars Legatus Legionis
11,127
Subscriptor++
I've been running all the betas. So far, Apple Intelligence has been more of a pain than any help whatsoever.

  • Rewriting is just not clear and effective
  • Summaries are really hot garbage
  • Siri is still REALLY bad at understanding what you want. Last night for example:

"Siri, set me a reminder at 9am tomorrow to call xxx about yyy"
"ok, your alarm is set for 9am"
"Siri, please cancel my 9am alarm"
" Ok, I found 3 alarms, none on, for 6am, 7am, and 9am, which do you want to cancel?"
"9am"
" Ok, I found 3 alarms, none on, for 6am, 7am, and 9am, which do you want to cancel?"
"9am"
" Ok, I found 3 alarms, none on, for 6am, 7am, and 9am, which do you want to cancel?"
"Fuck, Siri cancel all my alarms"
" Ok, I found 3 alarms, none on, for 6am, 7am, and 9am, which do you want to cancel?"
"Siri - shut the hell up"

Blessed silence. I then sat at my computer and did the reminder manually.

Running latest Release Candidate.