Great list, and it sparked my memory. The tmux scrollback issue was very frustrating for me, and ultimately what made me stop using it. What got me hooked on trying it in the first place: I remember being amazed that I could close my laptop, go home, reopen it, and immediately start typing away on my remote terminal, without issue most of the time.
A few other issues I do remember running into (probably 10 years ago?):
1. Because it ran over UDP, it was sometimes blocked by captive portals and by work or hotel firewalls.
2. It also wasn't installed by default anywhere, so sysadmins at work were sometimes unwilling to install it for me.
3. I sometimes ran into issues with color handling and UTF-8. I honestly don't remember if the problem was mosh or my terminal, though. It seemed to happen more often when I used mosh, but I could be misremembering.
It is an advanced driver-assistance system. It kind of drives your car for you (lane centering, basic obstacle avoidance, and adaptive cruise control, all without you having to touch the wheel or the pedals) as long as you're looking forward and paying attention to the road.
They have demonstrated full self-driving capabilities with a car driving "itself" to Taco Bell. I have a comma3 and have never had much success with that feature. The car drove itself very slowly and seemed to just creep weirdly through stop signs. The last time I tested it was over a year and a half ago, though, so it may have improved.
I use mine only on highways. I've noticed that on long trips (6+ hours) I can drive longer distances in one go and feel less fatigued when I reach my destination. For example, a 10-hour trip to visit family (11-12 hours including stops) is one I can now do by myself in one day with the comma device, instead of stopping halfway or splitting the driving with someone else. On shorter trips (3-6 hours), I arrive at my destination with more energy than when I drive without these features. I'm also able to focus more on potential obstacles further down the road than I could without it.
I think my device has already paid for itself, thanks to a couple-year period when I had to make that 12-hour trip a couple of times per month. Plus, it's a really nice dash cam.
> How can you be confident the system is at least as reliable with the concerns you are less focused on?
That's my current heartache with my Comma: it does a stunningly shitty job of decelerating for brake lights ahead, choosing instead to keep accelerating (or I guess keeping its speed) and then slamming on the brakes when it gets a few feet from the car. OT1H, it's never actually put me in danger; OTOH, I don't want "next time" to be the unlucky one.
Not only does that make me super nervous, it also creates a risk of being rear-ended (since the poor Comma can't see what's behind me).
I haven't worked up the nerve to build and flash one of the 18 quadrillion forks onto my Comma; I've heard some of them are better, but that word "some" is doing a lot of work.
> They have demonstrated full self-driving capabilities with a car driving "itself" to Taco Bell. I have a comma3 and have never had much success
I'm surprised you've had any success at all. Are you a Comma Prime subscriber or something? Because mine absolutely gives no shits about red (or yellow!) lights, stop signs, "danger, sharp curve ahead" signs, nothing. If it's the open road, lucky me. If there's the slightest decision to make, it's best to disengage.
If you're considering a similar pattern with Flutter rather than React Native, they call it "add to app", and there are a couple of good talks on how others have approached it [1], [2] from the recent FlutterCon USA, as well as a couple of articles with details and case studies [3], [4].
I haven't tried this myself on a large project (just small proof-of-concept examples), but the approach seems very sound. One thing I liked: once you have the legacy app shell figured out, it's not a crazy approach to mock out the bridge/native services and run the app in just Flutter (or React Native) to speed up development and testing, then add final integration testing/QA with the full legacy app shell. I've seen some odd behaviors from apps that have used this approach, which I have to imagine are serious headaches to debug. That said, the approach does seem to pay off long-term.
There's not much published online about it, but I believe Headspace has used this approach for its mobile app. See [5]
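Mechanically, the core of "add to app" on the Android side is pre-warming a FlutterEngine in the host app and launching Flutter screens against it. Here's a minimal sketch of that host wiring; the class name and cache key are made up, but the FlutterEngine/FlutterEngineCache/FlutterActivity calls are the standard add-to-app API:

  import android.app.Application
  import io.flutter.embedding.android.FlutterActivity
  import io.flutter.embedding.engine.FlutterEngine
  import io.flutter.embedding.engine.FlutterEngineCache
  import io.flutter.embedding.engine.dart.DartExecutor

  class LegacyHostApp : Application() {
      override fun onCreate() {
          super.onCreate()
          // Pre-warm a Flutter engine so embedded screens open without a cold start.
          val engine = FlutterEngine(this)
          // Start executing the default Dart entrypoint (main()) ahead of time.
          engine.dartExecutor.executeDartEntrypoint(
              DartExecutor.DartEntrypoint.createDefault()
          )
          // Cache it under a key the rest of the legacy app can reference.
          FlutterEngineCache.getInstance().put("legacy_host_engine", engine)
      }
  }

  // Later, from any existing Activity in the legacy app:
  //   startActivity(FlutterActivity.withCachedEngine("legacy_host_engine").build(this))

The iOS side is analogous: pre-warm a FlutterEngine in the AppDelegate and present a FlutterViewController from existing screens.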
You might want to tune the sampler. For example, set it to a lower temperature. Also, the 4-bit RTN quantisation seems to be messing up the model; perhaps GPTQ quantisation will be much better.
./main -m ./models/7B/ggml-model-q4_0.bin \
--top_p 2 --top_k 40 \
--repeat_penalty 1.176 \
--temp 0.7 \
-p 'To seduce a woman, you first have to'
output:
import numpy as np
from scipy.linalg import norm, LinAlgError
np.random.seed(10)
x = -2*norm(LinAlgError())[0] # error message is too long for command line use
print x [end of text]
Agreed with the comments on the demo video music. It was startling.
I'm still pretty skeptical, since all your demo has shown is something similar to what a boilerplate tool might do, but with more unknowns. You have to start somewhere, though, and I think you're on a decent path.
I've been experimenting with ChatGPT on workflows similar to what you're describing for your vision of Second. From my experiments, it seems your demo may be approaching the current limits of GPT models. Fine-tuning on my existing code base and requirements may help, but I still feel we're a very long way from your vision for Second. Maybe even far enough away that it won't ever be realized.
Still, you've already built a much better workflow than my experiments have yielded so far, and I think it is a very exciting proposition.
Feel free to disregard, but I think I'd approach the business differently than you have: raise your prices drastically, and instead of starting with a dev-facing tool, make your offering more like an agency. Hire devs and designers to build the client work, and salespeople to find more deals. Have the devs and designers use your tool and provide feedback. Restrict communication between the client and the devs/designers to AI-parseable docs. Collect data as you iterate through as many projects as you can find and staff to deliver, then train/fine-tune LLMs on that whole dataset. Iterate on your tool throughout this process. Even better if you could embed yourself/your tool into existing agencies and dev shops, but that seems like it could be difficult to navigate.
Thanks! I'll get the audio fixed for the next video. Agreed, you have to start somewhere. As the Second bots get more sophisticated, I think they will pull further and further away from what you get today with traditional starter kits and boilerplates. Most strikingly, you can use Second with existing web applications! You can't do that with boilerplates.
About agencies: yeah, some of my first customers are agencies, and the value prop is pretty great for them. And yeah, pricing is super hard.
Jesus, just crank it down by about 10 dB and re-upload the exact same video. The music, while generic and relatively bland, isn't so much the issue as the fact that it's practically deafening.
There are several stories like this at the Stasi Museum[0]. Highly recommend checking it out if you are ever in Berlin. The Jugend Widerstands (Youth Resistance) Museum[1] has some interesting stories about life in the DDR too.
I was educated in the US and took AP US History in high school. I don't recall any specific reference to Zersetzung in that curriculum, but the idea that life in East Germany was difficult due to Stasi oppression was covered. I don't recall specific examples like the story I asked about, though (beyond my own extracurricular exposure through movies/books).
Did a quick search and didn't find it. All I remember is that it was written up in a rather major publication some years ago, in a piece about the fall of the Stasi.
While trying to find my source, I did learn that Putin was in East Germany working for the KGB when the wall fell. Along with the Stasi, he burned so much evidence of their wrongdoing that it broke the furnace.
So I guess Putin was a questionable fellow even back in the 1980s and 1990s!
> IBM, Bell Labs, GE; who else should be on that list?
Intel felt quite a bit like IBM during my time there circa 2010.
I wasn't familiar with this quote, but wow, was that ever my experience at large tech orgs. I wish I had known this about 15 years ago; perhaps I would have done a better job picking orgs to work at early in my career, instead of being so frustrated.
Not directly relevant, but I've had all sorts of issues, similar in nature, with other companies that use Zendesk and integrations with Zendesk. It seems surprisingly bad given its popularity.
Interesting. I'm casually familiar with video motion amplification (the approach from SIGGRAPH about a decade ago, IIRC), but have never implemented it myself. It's a really cool result: using the changes in the phase of the basis vectors over time to infer motion, without having to compute dense optic flow.
I'm curious how you would combine acoustic localization in 3-space with motion amplification. I unreservedly agree that they are both "super cool", but I don't see how they tie together to make something greater than the sum of their parts.
The only thing I can think of: if the two data channels (video, audio) are registered accurately enough, one could maybe combine the spatially localized frequency information from both channels for higher accuracy?
For example: voxel (10,10,10) is determined (by the audio system) to have a high amount of coherent sound with a fundamental frequency of 2 kHz. Can that 2 kHz + (10,10,10) be passed to the video system to do something... cool? Useful? If we know that sound with a certain spectral profile is coming from a specific region, is it useful to amplify (or deaden) video motion at that same frequency?
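To make that last idea concrete, here's a toy, single-pixel sketch of "amplify video motion at the frequency the audio system reported". It's illustrative only: real motion magnification works on spatial (or complex-steerable) pyramid bands rather than raw pixel intensities, the fps/f0/alpha values are placeholders, and actually capturing 2 kHz motion would need a camera running well above 4,000 fps (Nyquist).

  import kotlin.math.PI
  import kotlin.math.cos
  import kotlin.math.sin
  import kotlin.math.roundToInt

  // Given one pixel's intensity over time (sampled at `fps`), isolate the
  // temporal frequency f0 reported by the acoustic system via a single DFT
  // bin, then add that band back scaled by `alpha` to exaggerate it.
  fun amplifyAtFrequency(series: DoubleArray, fps: Double, f0: Double, alpha: Double): DoubleArray {
      val n = series.size
      val k = (f0 * n / fps).roundToInt()        // nearest DFT bin to f0
      var re = 0.0
      var im = 0.0
      for (t in 0 until n) {                     // project the series onto bin k
          val th = 2.0 * PI * k * t / n
          re += series[t] * cos(th)
          im -= series[t] * sin(th)              // X_k = sum of x_t * e^(-i*th)
      }
      return DoubleArray(n) { t ->               // add the amplified band back in
          val th = 2.0 * PI * k * t / n
          series[t] + alpha * (2.0 / n) * (re * cos(th) - im * sin(th))
      }
  }

Whether the band you amplify actually corresponds to the localized source is exactly the registration question above; with good spatial registration, you could presumably restrict the amplification to just the pixels that project into that voxel.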