The new features in the 18.2 GA release are a total shit-show. I've never seen Apple software this rushed out the door, probably to beat the end-of-year shutdown.
Image Playground is full of so many bugs and weird UX patterns.
- The app is called "Playgrounds", which is odd given Apple already ships an app called Playgrounds (full name: Swift Playgrounds). It's especially confusing if you put yourself in the shoes of someone who has no idea what AI is or what Apple is building.
- When you launch it, it has to download models, a process that took over an hour on my internet connection. But the only indication that models are downloading is a single informational popup when you first launch the app; there's no progress indicator.
- Meanwhile, I was dropped into the full app experience and allowed to try generating images. Every generation failed with an error.
- However, that error is totally unreadable, because it's presented as a toast at the top of the screen that renders underneath the Dynamic Island. All I could see was the first few and last few characters.
- I've heard confusion from two friends (techy bubble, of course) along the lines of "why can I only generate variations on these four friends?" It's because the list of people you can generate images of seems to come from the facial-recognition targets in Apple Photos. Yup. None of my friends who have tried this pay for iCloud photo storage, so their photos aren't being backed up, and their list of generation targets is limited to people in recently taken images.
- You can upload arbitrary images to the model, but the results are... well, not predictable. A picture of a bookshelf generates a piano. A picture of a person fails with an error, clipped by the Dynamic Island: "choose a photo with the face more in view". A picture of a black SUV generates a yellow jeep. My immediate sense is that they're feeding the uploaded image into a "describe this image" LLM, summarizing it, then feeding that summary back into the image-generation model.
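If that guess is right, the pipeline would look roughly like the sketch below. To be clear, this is pure speculation from observed behavior, with hypothetical stand-in functions; it is not Apple's actual API or implementation.

```python
def describe_image(image_bytes: bytes) -> str:
    """Hypothetical captioning step: a 'describe this image' model
    reduces the upload to a short text summary. Stand-in only; a real
    implementation would call a vision-language model."""
    return "a black SUV parked outdoors"

def generate_image(prompt: str) -> str:
    """Hypothetical generation step: the text prompt, not the original
    pixels, drives the image model. Stand-in only."""
    return f"generated image for prompt: {prompt!r}"

def image_playground_guess(image_bytes: bytes) -> str:
    # The upload never conditions the generator directly; only the lossy
    # text description does. That would explain why a black SUV can come
    # back as a yellow jeep: "SUV" survives the summary, "black" may not.
    prompt = describe_image(image_bytes)
    return generate_image(prompt)
```

The key property of this guessed architecture is that everything not captured in the intermediate caption is lost, which matches the bookshelf-to-piano and SUV-to-jeep behavior above.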
- There is one additional option in this menu, labeled "Appearance". I dare you to click it, put yourself in the shoes of someone who doesn't browse HackerNews every day, and try to understand why it's there and what it does. I think it's a way to generate a generic AI-generated human without a real source image? You get a choice of skin tone, and then some kind of ever-changing collage of Vibes for the person you want generated? I can't even explain what's going on, because even I'm confused; it's incomprehensible.
- The share sheet breaks roughly 20% of the time. On one occasion it crashed the app with a popup that displayed a stack trace.
- We were told we'd be able to generate images in three unique styles: Sketch, Animation, and Illustration. Only two of these are available (Sketch is absent) [1].
It's really bad. Even when it works, the images it generates are pretty trash.
[1] https://youtu.be/RXeOiIDNNek?t=4180