davidb_'s comments

Great list that sparked my memory. The tmux scrollback issue was very frustrating for me, and ultimately what made me stop using it. What got me hooked on trying it in the first place: I remember being amazed that I could close my laptop, go home, reopen it, and immediately start typing away on my remote terminal without issue most of the time.

A few other issues I do remember running into (probably 10 years ago?):

1. Because it uses UDP, captive portals and work or hotel firewalls would sometimes block it.

2. It was also not installed by default anywhere, so sysadmins at work were sometimes unwilling to install it for me.

3. I sometimes ran into issues with color handling and UTF-8. I honestly don't remember whether it was mosh or my terminal, though. It seemed to happen more often when I used mosh, but I could be misremembering.


It is an advanced driver assistance system. It more or less drives your car for you (lane centering, basic obstacle avoidance, and adaptive cruise control, all without you having to touch the wheel or the pedals) as long as you're looking forward and paying attention to the road.

They have demonstrated full self-driving capabilities with a car driving "itself" to Taco Bell. I have a comma3 and have never had much success with that feature. The car drove itself very slowly and seemed to just weirdly creep through stop signs. The last time I tested it was over a year and a half ago, though, so it may have improved.

I use mine only on highways. I've noticed that on long trips (6+ hours), I can drive longer distances in one go and don't feel as fatigued when I reach my destination. As an example, a 10-hour trip to visit family (11-12 hours including stops) is one I can do by myself in one day with the comma device, instead of stopping halfway or splitting driving time with someone else. For shorter trips (3-6 hours), I arrive at my destination with more energy than when I drive without these features. I'm also able to focus more on potential obstacles further down the road than I could without it.

I think my device has already paid for itself, thanks to a couple-year period when I had to make that 12-hour trip I mentioned a couple of times per month. Plus, it's a really nice dash cam.


Are you tuning out closer concerns like lane keeping or smaller objects?

How can you be confident the system is at least as reliable with the concerns you are less focused on?


> How can you be confident the system is at least as reliable with the concerns you are less focused on?

That's my current heartache with my Comma: it does a stunningly shitty job of decelerating into brake lights ahead, choosing instead to keep accelerating (or I guess maintaining speed) and then slamming on the brakes once it gets a few feet from the car. OT1H, it's never actually put me in danger; OTOH, I don't want "next time" to be the one where my luck runs out.

Not only does that make me super nervous, it's also a rear-ending risk (since the poor Comma can't see what's behind me).

I haven't worked up the nerve to build and flash one of the 18 quadrillion forks onto my Comma. I've heard some of them are better, but that "some" is doing a lot of work.


> They have demonstrated full self-driving capabilities with a car driving "itself" to Taco Bell. I have a comma3 and have never had much success

I'm surprised you've had any success at all. Are you a Comma Prime subscriber or something? Because mine absolutely gives no shits about red (or yellow!) lights, stop signs, "danger, sharp curve ahead," nothing. If it's the open road, lucky me. If there's the slightest decision to make, best to disengage.


If you're considering a similar pattern with Flutter rather than React Native: they call it "add-to-app," and there are a couple of good talks on how others have approached it [1], [2] from the recent FlutterCon USA, as well as a couple of articles with details and case studies [3], [4].

I haven't tried this myself on a large project (just small proof-of-concept examples), but the approach seems sound. One thing I liked: once you have the legacy app shell figured out, it's not crazy to mock out the bridge/native services and run the app in just Flutter (or React Native) to accelerate development and testing, then add final integration testing/QA with the full legacy app shell. I've seen some odd behaviors from apps that have used this approach, and I have to imagine those are serious headaches to debug. That said, the approach does seem to pay off long-term.

There's not much published online about it, but I believe Headspace has used this approach for its mobile app. See [5]

[1] https://www.droidcon.com/2024/10/17/flutter-add-to-app-the-g...

[2] https://www.droidcon.com/2024/10/17/successful-flutter-re-pl...

[3] https://docs.flutter.dev/add-to-app

[4] https://leancode.co/blog/flutter-add-to-app-overview-and-cha...

[5] https://www.nearcoast.com/headspaces-leap-to-flutter-a-game-...


I had the same question. He says TestFlight, but he's handwaving away some of the headaches there: TestFlight builds expire every 90 days.

The other option would be an ad hoc certificate, but then you have to collect everyone's Apple ID.

Apple makes this kind of app distribution process more painful than it needs to be.


It should hopefully become easier in the future once sideloading is possible on iOS.


Sideloading is still blocked in the US in their new releases. Only allowed where mandated (EU)


With four users that’s not as onerous as it sounds.


Are you getting useful content out of the 7B model? It goes off the rails way too often for me to find it useful.


You might want to tune the sampler, for example by setting a lower temperature. Also, the 4-bit RTN quantisation seems to be hurting the model; GPTQ quantisation will probably be much better.
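
For intuition, here's roughly what those sampler knobs do, sketched in Python. This is a simplified illustration under my own assumptions, not llama.cpp's actual code; the function name and defaults are made up:

    import numpy as np

    def sample_next_token(logits, temp=0.7, top_k=40, repeat_penalty=1.18,
                          recent_tokens=()):
        """Simplified temperature + top-k sampling with a repetition penalty."""
        logits = logits.astype(np.float64).copy()
        # Penalize recently emitted tokens so the model doesn't loop on itself.
        for t in set(recent_tokens):
            logits[t] = (logits[t] / repeat_penalty if logits[t] > 0
                         else logits[t] * repeat_penalty)
        # Lower temperature sharpens the distribution -> fewer wild samples.
        logits /= temp
        # Keep only the top_k most likely tokens, then sample among them.
        top = np.argpartition(logits, -top_k)[-top_k:]
        probs = np.exp(logits[top] - logits[top].max())
        probs /= probs.sum()
        return int(np.random.choice(top, p=probs))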


Use `--top_p 2 --top_k 40 --repeat_penalty 1.176 --temp 0.7` with llama.cpp


Not bad with these settings:

    ./main -m ./models/7B/ggml-model-q4_0.bin \
    --top_p 2 --top_k 40 \
    --repeat_penalty 1.176 \
    --temp 0.7 \
    -p 'async fn download_url(url: &str)'


    async fn download_url(url: &str) -> io::Result<String> {
      let url = URL(string_value=url);
      if let Some(err) = url.verify() {} // nope, just skip the downloading part
      else match err == None {  // works now
        true => Ok(String::from(match url.open("get")?{
            |res| res.ok().expect_str(&url)?,
            |err: io::Error| Err(io::ErrorKind(uint16_t::MAX as u8))),
            false => Err(io::Error


lol,

    ./main -m ./models/7B/ggml-model-q4_0.bin \
    --top_p 2 --top_k 40 \
    --repeat_penalty 1.176 \
    --temp 0.7 \
    -p 'To seduce a woman, you first have to'
output:

    import numpy as np
    from scipy.linalg import norm, LinAlgError
    np.random.seed(10)
    x = -2*norm(LinAlgError())[0]  # error message is too long for command line use
    print x [end of text]


What fork are you using?

repeat_penalty is not an option.



It's a new feature :) Pull latest from master.


Have you tried using the original repo?


Agreed with the comments on the demo video music. It was startling.

I'm still pretty skeptical, as all your demo has shown is something similar to what a boilerplate tool might do, but with more unknowns. You have to start somewhere, though, and I think you're on a decent path.

I've been experimenting with ChatGPT on workflows similar to what you're describing for your vision of Second. From my experiments, it seems your demo may be approaching the current limits of GPT models. Fine-tuning on my existing code base and requirements might help, but I still feel we're a very long way from your vision for Second. Maybe even far enough away that it will never be realized.

Still, you've already built a much better workflow than my experiments have yielded so far, and I think it is a very exciting proposition.

Feel free to disregard, but I think I'd approach the business differently than you have: raise your prices drastically, and instead of starting with a dev-facing tool, make your offering more like an agency. Hire devs and designers to build the client work, and salespeople to find more deals. Have the devs and designers use your tool and provide feedback. Restrict communication between the client and the devs/designers to AI-parseable docs. Collect data as you iterate through as many projects as you can find and staff to deliver, then train/fine-tune LLMs on that whole dataset. Iterate on your tool throughout this whole process. Even better if you could embed yourself/your tool into existing agencies and dev shops, but that seems like it could be difficult to navigate.


Thanks! I'll get the next video's audio fixed. Agreed, you have to start somewhere. As the Second bots get more sophisticated, I think they will pull further and further away from what you get today with traditional starter kits and boilerplates. Most strikingly, you can use Second with existing web applications! You can't do that with boilerplates.

About agencies: yeah, some of my first customers are agencies, and the value prop is pretty great for them. And yeah, pricing is super hard.


Jesus, just crank it down by about 10 dB and re-upload the exact same video. The music, while generic and relatively bland, isn't so much the issue as the fact that it's practically deafening.


Any chance you have a source for the bicycle tire story? I'm keen to read more about it (and similar stories).


There are several stories like this at the Stasi Museum [0]. I highly recommend checking it out if you're ever in Berlin. The Jugendwiderstand (Youth Resistance) Museum [1] has some interesting stories about life in the DDR too.

[0] https://stasimuseum.de/

[1] https://widerstandsmuseum.de/


The East German secret police after WWII had a policy of Zersetzung: their brand of petty, persistent psychological harassment that disrupted people's lives.

I'm curious: did they stop teaching in schools that Zersetzung happened in post-WWII East Germany? I was taught about it in high school.


I was educated in the US and took AP US History in high school. I don't recall a specific reference to Zersetzung in that curriculum, but the idea that life in East Germany was difficult due to Stasi oppression was covered. I don't recall specific examples like the story I asked about, though (besides my own extracurricular exposure through movies/books).


Thanks for the complete answer! It doesn't actually answer what I wanted to know, but it does answer exactly what I asked.


I did a quick search and didn't find it. All I remember is that it was written up in a rather major publication some years ago, in a piece about the fall of the Stasi.

While trying to find my source, I did learn that Putin was in East Germany when the wall fell, working for the KGB. Along with the Stasi, Putin burned so much evidence of their wrongdoing that it broke the furnace.

So I guess Putin was a questionable fellow even back in the 1980s and 1990s!


> IBM, Bell Labs, GE; who else should be on that list?

Intel felt quite a bit like IBM during my time there circa 2010.

I wasn't familiar with this quote, but wow, it matches my experience at large tech orgs. I wish I had known this about 15 years ago; perhaps I would have done a better job picking orgs to work at early in my career instead of being so frustrated.


Not directly relevant, but I've had all sorts of issues, similar in nature, with other companies that use Zendesk and integrations with Zendesk. It seems surprisingly bad given its popularity.


Combining this with Motion Amplification/Video Magnification [1] could result in some very interesting visuals and applications for factory equipment.

[1] Explainer YouTube video about Motion Amplification: https://www.youtube.com/watch?v=rEoc0YoALt0


Interesting. I'm casually familiar with Video Magnification (the approach from SIGGRAPH a decade ago, IIRC) but have never implemented it myself. It's a really cool result: using the changes in the phase of the basis vectors over time to infer motion, without having to do dense optical flow.

I'm curious how you would combine acoustic localization in 3-space with motion amplification. I unreservedly agree that they are both "super cool," but I don't see how they tie together to make something greater than the sum of their parts.

The only thing I can think of: if the two data channels (video, audio) are registered accurately enough, one could maybe combine the spatially localized frequency information from both channels for higher accuracy?

For example: voxel (10, 10, 10) is determined (by the audio system) to have a large amount of coherent sound with a fundamental frequency of 2 kHz. Can that 2 kHz + (10, 10, 10) be passed to the video system to do something... cool? Useful? If we know that sound with a certain spectral profile is coming from a specific region, is it useful to amplify (or deaden) video motion at the same frequency?
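
To make that concrete, here's a hypothetical Python sketch of the simplest version: band-pass each pixel's time series around the frequency the audio system reported, then add the filtered motion back amplified, Eulerian-style. Everything here is assumed (registered grayscale frames, a known f0), and note the camera must be fast enough that f0 sits below fps/2, so a literal 2 kHz tone would need a high-speed camera:

    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def amplify_at_frequency(frames, fps, f0, bandwidth=0.5, gain=10.0):
        """Amplify pixel intensity variation near f0 Hz, Eulerian-style.

        frames: (T, H, W) grayscale video; f0: frequency reported by the
        acoustic localizer for the corresponding region of the scene.
        """
        nyq = fps / 2.0
        assert f0 < nyq, "frame rate must exceed twice the target frequency"
        lo = max(f0 - bandwidth, 0.01) / nyq
        hi = min(f0 + bandwidth, nyq * 0.99) / nyq
        sos = butter(4, [lo, hi], btype="bandpass", output="sos")
        # Band-pass each pixel's time series around f0, then add it back, scaled.
        motion = sosfiltfilt(sos, frames.astype(np.float64), axis=0)
        return frames + gain * motion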


I don't suppose you have any idea if there are publicly available motion amplification tools, yet?


A starting point for the MIT research in question can be found here https://people.csail.mit.edu/mrub/vidmag/


The authors of the predecessor method released some of their code:

https://people.csail.mit.edu/mrub/vidmag/#code


Motion and color amplification from Wu et al. are underused, in my opinion. Maybe because they're under patent?


The patents will expire between 2035 and 2040, depending on the method used.


Thus another surveillance tool is born.


It has been rumored for decades now that the US military has heartbeat sensors (aka a real-life minimap). Would this really be a new one?


You can pick up heartbeat and breathing rate (if the person is relatively still) with simple CW radar; I did it with a $10 HB100 module and Audacity.

I bet if you really worked on tuning and filtering, you might even be able to pick up the vibration of a person's throat and hear what they say.
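
For anyone wanting to reproduce this, the signal-processing side is approachable. A hypothetical Python sketch, assuming you've exported the HB100's mixer (IF) output as a WAV via Audacity; the band edges are ballpark figures for breathing and heart rate, not calibrated values:

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import butter, decimate, sosfiltfilt

    def vital_signs(wav_path):
        """Split a baseband Doppler recording into breathing and heartbeat bands."""
        fs, x = wavfile.read(wav_path)
        x = x.astype(np.float64)
        if x.ndim > 1:
            x = x[:, 0]  # take one channel if the recording is stereo
        # Chest motion shows up as sub-Hz wobble in the Doppler signal,
        # so decimate aggressively before filtering.
        while fs > 50:
            x = decimate(x, 10)
            fs //= 10
        breath = sosfiltfilt(
            butter(2, [0.1, 0.5], btype="bandpass", fs=fs, output="sos"), x)
        heart = sosfiltfilt(
            butter(2, [0.8, 2.5], btype="bandpass", fs=fs, output="sos"), x)
        return breath, heart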


Then put a gun on it and you have something even worse!


That is a great video.

