Hacker News

I have been using Chat GPT to control my light colours for about a month now. It's too tedious to properly set the colours and temperatures of our lights manually and too complex to consider all factors like activity, weather, music, time of day and season.

Chat GPT is now our personal lighting DJ, giving us dynamic and interesting light combinations that respect our circadian rhythm.

Here's my prompt - the output of which feeds Home Assistant:

Set the hue for my home's lights using the HSL/HSB scale from 0-360 by providing a primary and complementary colour which considers the current situation. The HSL color spectrum ranges from 0 (red), 120 (green), to 240 (blue) and back to 360 (red). Lower values (0-60) represent warmer colors, while higher values (180-240) represent cooler colors. Middle values (60-180) are neutral.

Consider these factors in setting the primary hue (in order of importance):

1. Preferences throughout the day:
   - When about to wake: Reds, oranges or hot pinks
   - Approaching bedtime: Hot pinks or reds
   - During worktime: Blues, greens or yellows
   - Other times: Greens, yellows or oranges

2. Current activity: Bedtime

3. Sleep schedule: Bedtime 23:00, Wake-up time 07:00

4. Date & time: Sunday May 21, 05:40

5. Current primary hue: 10

6. Current complementary hue: 190

7. Weather: 13°C, wind speed 9 km/h, autumn

Respond in this format and provide a reason in <250 characters:

{"primary_hue": PRIMARY_HUE, "complementary_hue": COMPLEMENTARY_HUE, "reason": REASON}

The output looks like this:

{"primary_hue": 10, "complementary_hue": 190, "reason": "Approaching bedtime and early hours of morning, so a warm and calming hue is needed. Complementary hue adjusted slightly to 195 to maintain balance."}
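Not part of the original setup, but as an illustration, the model's reply could be parsed and sanity-checked before handing the hues to Home Assistant with something like this sketch (function name and clamping behaviour are my own assumptions):

```python
import json

def parse_hue_reply(reply: str) -> dict:
    """Parse the model's JSON reply and wrap hues back onto the 0-360 circle."""
    data = json.loads(reply)
    for key in ("primary_hue", "complementary_hue"):
        data[key] = int(data[key]) % 360  # guard against out-of-range values
    return data

result = parse_hue_reply(
    '{"primary_hue": 10, "complementary_hue": 190, "reason": "Warm hue for bedtime."}'
)
print(result["primary_hue"], result["complementary_hue"])  # 10 190
```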



FWIW, I just tried the same prompt with Open Assistant:

> {"primary_hue": 345, "complementary_hue": 55, "reason": "For bedtime, warm orange tones at 345 create relaxation while paired with cool green at 55 helps prepare your body for sleep."}

You can absolutely replace this with something self hosted, this was using the `oasst-sft-6-llama-30b` model on https://open-assistant.io


Thanks! This is a great find... I'll have a look. Fingers crossed my hardware can run this.


For stuff like this I like to make it write out a lot of "reasoning" before the final output that I'll parse.

Like so:

Write three thoughts on how the primary and complementary hue should change and what value they should change to, along with your reasoning.

Pick one, summarize the reason for it in less than 50 words.

Then write FINAL CHOICE: followed by output that looks like this {"primary_hue": PRIMARY_HUE, "complementary_hue": COMPLEMENTARY_HUE, "reason": REASON}
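One way to pull the structured part out of that longer completion is to search for the marker and parse what follows - a sketch (the marker and regex are assumptions, not tested against a real model):

```python
import json
import re

def extract_final_choice(completion: str) -> dict:
    """Return the JSON object that follows the FINAL CHOICE: marker."""
    match = re.search(r"FINAL CHOICE:\s*(\{.*\})", completion, re.DOTALL)
    if match is None:
        raise ValueError("no FINAL CHOICE marker found in completion")
    return json.loads(match.group(1))

completion = """Thought 1: shift warmer for bedtime.
Thought 2: keep the current hue.
Thought 3: go slightly pinker.
Summary: a warm hue suits the approach to bedtime.
FINAL CHOICE: {"primary_hue": 10, "complementary_hue": 190, "reason": "Bedtime calls for warmth."}"""
choice = extract_final_choice(completion)
```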


> ...provide a reason in <250 characters

For what it's worth, I've been using something similar with my prompts and felt the completions did a poor job of honoring this, but do a better job when asked to use words instead of characters.


> For what it's worth, I've been using something similar with my prompts and felt the completions did a poor job of honoring this, but do a better job when asked to use words instead of characters.

Yes, restricting by characters is hard for GPT-style LLMs because they work in tokens, not characters.


Thanks, that's a good pick-up - I'll refine my prompt.


But LLMs have little concept of tokens, don't they? Or at least they may well not know what their tokenizer is like.


It can understand word boundaries, though. A space is its own token, and there are special tokens for common words prefixed with a space, or common word prefixes with a space in front, e.g. “ a”.


Imagine asking a person to give a verbal response in 250 characters or less. They could do it, but it would be a lot of work. Even saying less than 50 words is hard.

If you actually have a hard cap, you’ll have to give feedback. If it’s just you don’t want an essay, it works great to say something like “a few sentences”. And as always, examples help a ton.


GPT-4 does a much better job at paying attention to details in prompts


A video of these conditions and the resulting colors would be nice.

I wonder if the limited number of variables means one could just ask ChatGPT to generate a one-time lookup table of colors and store them locally. But it's interesting to see that an LLM can be a "color designer".


I like the idea of storing the colours locally - better for privacy too! Definitely a novelty factor that AI is "designing" my apartment's colours.

Here's a snapshot of how it works in my office - bedroom and living room have similar setups and draw from the same colours:

https://ibb.co/ysxY3Lf https://ibb.co/9VgYVTb

And details on the setup in Node Red/Home Assistant: https://news.ycombinator.com/item?id=36018291

Prompt:


I realize now, instead of a lookup table, ChatGPT could probably generate a piece of code that considers those inputs (so e.g. temperature, time of day/weekday, cloud cover); and output colors.
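For example, the generated code might look something like this - a hand-written sketch, not actual ChatGPT output, with arbitrary thresholds and hue choices:

```python
def pick_hues(hour: int, temp_c: float, cloud_cover: float) -> tuple[int, int]:
    """Map time of day and weather onto a (primary, complementary) hue pair, 0-360."""
    if 5 <= hour < 9:      # waking up: reds and oranges
        primary = 20
    elif 9 <= hour < 17:   # work hours: blues
        primary = 200
    elif 17 <= hour < 22:  # evening: yellows and oranges
        primary = 45
    else:                  # near bedtime: reds and pinks
        primary = 340
    if temp_c < 10:        # nudge warmer on cold days
        primary = (primary - 15) % 360
    if cloud_cover > 0.7:  # shift slightly on overcast days
        primary = (primary + 10) % 360
    complementary = (primary + 180) % 360
    return primary, complementary
```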


Brilliant idea. I hadn't thought of that, but it would save me some API quota and give me a little more control over the outcome.

I might struggle matching colours to the music playing a little, but there are so many other factors that it probably doesn't matter.


That's fantastic. I might have to hook up my lights.

I also keep complaining that music recommendations fail because they don't take into account enough factors like this - would be great to control music choices this way too


Could you elaborate more on the setup? Say I were a technologically competent person, but unfamiliar with how to set up a system that keeps GPT live and then feeds its output into Home Assistant.


Certainly, I've just shared this in another comment here:

https://news.ycombinator.com/item?id=36018291

The flow is pretty simple in Node Red - render the prompt from Home Assistant and pass it to Chat GPT. The response from Chat GPT is parsed and sent back to Home Assistant in some "Helper" variables known as "input_number" and "input_text".

Once the values are in Home Assistant, it's pretty easy to change the colours of lights in an automation.
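As a rough sketch, such an automation could look something like this (entity and light names are placeholders, not the actual setup):

```yaml
automation:
  - alias: "Apply GPT-chosen hue"
    trigger:
      - platform: state
        entity_id: input_number.primary_hue
    action:
      - service: light.turn_on
        target:
          entity_id: light.office_primary
        data:
          hs_color:
            - "{{ states('input_number.primary_hue') | int }}"
            - 80
```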


That's really cool. Are you using the API to run this locally?


Yes, I access the API through Node Red which fills out the prompt template and returns Chat GPT's output to Home Assistant.

Costs about $3/year in API quota.


Does ChatGPT really work better in this case than a bunch of if statements and a random number generator?


Haha, probably not. The logic for my activities and the complexity of the complementary colours made it a little tough to maintain and extend. Plus there's a nice warm feeling knowing an LLM is busy designing my lighting throughout the day.


No, we have come full circle with added layers of complexity.


I don't get it. Do you not tell it the current primary and complementary hues? Seems like it gives you the same back. And in the reason it incorrectly says it adjusted the hue.


It hallucinates a bit, yes. But in general it gives me suitable colours for when I need them. I've yet to try this with GPT-4 or some of the other models suggested in the comments here.


This is awesome. How do you feed this into home assistant?


Thanks - The flow is pretty simple:

1. Every hour a Node Red flow runs
2. It generates a prompt using a Home Assistant Jinja template
3. The prompt is sent to Open AI
4. The response gets parsed from JSON and sent to Home Assistant's "input_number" entities
5. Lights in Home Assistant pick up the state change and set the new colours
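Outside Node Red, the same loop could be sketched in plain Python - the API call is stubbed out here, and the template and entity names are my guesses, not the actual setup:

```python
import json

# Abbreviated stand-in for the Jinja-rendered prompt template
PROMPT_TEMPLATE = (
    "Set the hue for my home's lights. Date & time: {now}. "
    "Current primary hue: {primary}. "
    'Respond as {{"primary_hue": ..., "complementary_hue": ..., "reason": ...}}'
)

def call_llm(prompt: str) -> str:
    """Stand-in for the OpenAI API call made from Node Red."""
    return '{"primary_hue": 340, "complementary_hue": 160, "reason": "Warm hue for bedtime."}'

def run_flow(now: str, primary: int) -> dict:
    """Render the prompt, query the model, and build the input_number payload."""
    prompt = PROMPT_TEMPLATE.format(now=now, primary=primary)
    reply = json.loads(call_llm(prompt))
    # Values Home Assistant would receive via input_number.set_value
    return {
        "input_number.primary_hue": reply["primary_hue"],
        "input_number.complementary_hue": reply["complementary_hue"],
    }

states = run_flow("Sunday May 21, 05:40", 10)
```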

If you're keen, I can share the Node Red flow diagram (or JSON).


This is super cool, I would love to implement something like this. Are you willing to post the node red yaml for it?


Certainly - just posted it in this comment here: https://news.ycombinator.com/item?id=36018291

Happy to answer any questions if you hit a snag.


[flagged]


You've been downvoted but I totally agree. Why not use the same amount of effort to try to make a meaningful difference in the world?


I guess my comment was also in reaction to the OP, which sounded so absurd to me I almost thought it was parody. At root, my response, upon further reflection, is that if all these gadgets cause so much consternation, maybe they are more trouble than they are worth? Less is more, perhaps!


My reaction was also rooted in my own experience of being so crushed down by the day-to-day responsibilities involved in keeping the kiddo on the right path, going to work, keeping house, that I can’t imagine having time to AI automate my mood lights.


Where do you rank the general idea of recreation on this scale? Should we be spending 16 hours a day on improving the world? No time off ever? Should we take amphetamines and maybe get by with 4 hours of sleep instead of 8 so that each day brings 4 more hours of bettering the world to the table?


So I suppose you don’t do anything to provide any kind of comfort or enjoyment to you or your family, ever? If you do, how exactly does that differ from someone working on their home automation?



