If you are feeling the way that James does, that the magic is gone... I encourage you to try creating things in a new domain.
Try making music, creating videos, making interactive LED art, building robots, or fabricating toys.
The tools we have today suddenly make it far easier and more fun to experiment with a new craft. What was once daunting is now approachable.
Start by using an AI-powered tool—without shame—to make something superficially 'cool'. Yes, we all know you used a 'cheat code' but that's okay! Now you get to dive in and deconstruct what you created. Tear it apart and learn how and why it works. Go as deep as your motivation carries you. Experiment, hack, and modify.
Just as in software, there will be many, many layers of abstraction that you can work through and learn about. Many of them are overflowing with magic and sources of inspiration, I promise.
The gears of capitalism will likely continue to aggressively maximize efficiency wherever possible, and this comes with both benefits and very real costs (some of which are described in James's post), but outside the professional sphere, it appears to me that we are entering a new hobbyist / hacker / creative renaissance. If you can find a way to release enough anxiety and let the curious and creative energy back in, opportunities start showing up everywhere.
Cranking out open source tools one day, endurance racing in IMSA LMP2 the next — I love it. Condolences about the DNF, but truly an impressive run nonetheless (especially the first-lap recovery). Kudos to you and your team.
There are a bunch of shady ones that I avoid; the really big legal ones are Blackhawk Network and InComm. Becoming an official distributor requires a lot of vetting on their side, though, so it isn't like any Joe Blow can apply and get approval.
You're not alone in this. It might be a good idea to delete or edit this post; as currently written, it provides instructions and encouragement to make these connections. :)
Splice | Data and Growth Engineering, Product Design | New York, NY, ONSITE and REMOTE (North + South America) | https://splice.com/
Splice is a creative platform of tools and services that helps musicians stay in the creative flow. Our products are used by a community of professional musicians from bedroom producers to Grammy-winning artists to make better music and to reduce complexity and self-doubt.
We're hiring for a range of engineering and leadership roles, particularly in Data, Growth, and Product Design. See Greenhouse for the list of open positions: https://boards.greenhouse.io/splice
Our technology stack is primarily Golang and JavaScript (with Angular on the front-end), but we work with a wide array of technologies. Our primary office is in NYC, but we have a distributed team and are open to remote hires. We also have a dedicated, professional-level music studio in our NYC office (open to employees), and many members of our team are accomplished artists who actively make music.
If you have questions, you can reach me at [my username] @ splice.com. I am a hiring manager for some of the roles, and can connect you with other hiring managers as needed.
The smoothness is a huge improvement, but touch latency and scrolling physics are still major problem areas, even in Jelly Bean on Nexus 7 hardware.
In testing, I've enjoyed the Nexus 7 form-factor, but the iPad's responsiveness and scroll behavior are such a relief when I switch back. It was immediately noticeable, even when beta-testing an app on my old iPad 1 today.
There should be rock-solid, high-performance graphics drivers for every Android phone. There aren't. The chip manufacturers aren't helping because they won't let 'regular' people get access to real documentation, and Google isn't helping because it won't put 4 or 5 engineers on it full time. ODMs don't do it because, if Google doesn't do it, why should they? They take the crappy vendor-supplied driver and run with it.
So far there is no penalty to a chip vendor for having crappy video drivers. This is an area nVidia invested in strongly to win the PC hardware space, but it has not done squat in the Android space; at least nothing is visible outside nVidia.
The Nexus 7 is the first Android tablet to even come close to moving the way the iPad does. This is something that totally confounds me about Google's internal process.
Speaking as an ex-GPU driver guy, you're off by an order of magnitude about the number of engineers it takes to make a solid graphics driver. For a good GPU driver that makes smart choices about display, power management, OGLES, etc., you're looking at 30-40 people, not 4 to 5. Multiply that by every chipset available for Android (NVIDIA Tegra, ARM Mali, Qualcomm Snapdragon, Imagination PowerVR, Samsung Exynos assuming they build their own GPU eventually), and it's prohibitive to do it all within one company. It's also harder for Android than for iOS because of the architectural diversity within the Android GPU market. Optimizing for a TBDR like PowerVR is different from optimizing for a standard renderer with a Z-buffer. iOS can make a bunch of assumptions that Android can't, thanks to its dependence on a single vendor.
This is a company that just put $3.5 billion of free cash flow into the bank in one quarter [1]. Let's say we pay each of these 40 people per architecture $250,000/year, and we do it for 10 architectures. That is $100M in salary. Let's say they each burn another $250,000 a year in benefits, extra 401K perks, their own cafeteria with a chef who does special orders; maybe that doubles the cost to $200M/year. So let's buy them all a house as a sign-on bonus if they get this done in 5 years or less; in the Valley that is $1-2M each for 400 people, call it $800M. So 5 years of epic salary, a house free and clear, and 5 years of effort comes to about 1.8 billion dollars. That is about one eighth of a single year's free cash flow, a bit more than 12%.
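The back-of-the-envelope numbers above can be sanity-checked with a quick script (all figures are the rough estimates from this comment, not real data):

```python
# Back-of-the-envelope check of the driver-team cost estimate.
engineers_per_arch = 40
architectures = 10
salary = 250_000          # per engineer, per year
overhead = 250_000        # benefits, perks, cafeteria, etc.
years = 5
house_bonus = 2_000_000   # sign-on house, upper estimate

headcount = engineers_per_arch * architectures          # 400 people
annual_cost = headcount * (salary + overhead)           # $200M/year
total = annual_cost * years + headcount * house_bonus   # $1.8B

annual_fcf = 3.5e9 * 4    # $3.5B/quarter of free cash flow
print(f"total: ${total / 1e9:.1f}B")
print(f"share of annual FCF: {total / annual_fcf:.1%}")
```

Which confirms the total lands at $1.8B, a bit under 13% of a year's free cash flow at that run rate.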
In my opinion that is the difference between investing in something strategic and 'hoping it will be great.'
Microsoft doesn't have its own team to write its own GPU drivers. They have conformance tests for third parties and meaningful penalties for drivers that don't conform (no automatic installing, etc). That's the model you should be thinking about, not iOS.
Microsoft is (or at least used to be) pretty smart about how to partner. Reminds me of that Steve Jobs quote along the lines of "I wish we were as good at partnering as Microsoft". I'm curious whether their new strategy (basically copying Apple and going vertical) will work out. If it does, we're definitely in a new computing era, one where commodity devices with highly compatible software are a thing of the past.
That is a great question. And I think the wrong one. But let me share my reasoning and you can tell me if you agree with that or not.
My reasoning goes like this: if you believe the smartphone and tablet are how people will be consuming the types of services Google would like to offer (search, social, etc.), then Google needs to be able to deliver and innovate in that market. If the market leader in devices is hostile to Google, then Google needs to either enable a new leader that it can control or be the new leader itself. Android's strategy of being open has been excellent at acquiring hardware partners, but it has not been able to compete in terms of user experience. Apple has demonstrated for three generations (3G, 4, 4S) that user experience dominates the smartphone decision. And the user experience is dominated by graphics and graphics performance.
I don't think that three years ago Google could have appreciated just how much impact solid graphics support would have in consumers' minds, but now it seems painfully clear to me (and to the bloggers who write these types of articles, AnandTech, etc.) that this part of the equation is key.
I claim that Google's Android can nearly match Apple's software feature for feature in smartphone OSes, but they don't have a hardware partner that can deliver graphics performance. So by the same reasoning that said "we need to create an OS for smartphones so that we're able to compete," they should now know that "we need to create the complete platform that enables a competitive user experience." The biggest and most stubborn nail sticking up from that problem is effective, high-performance graphics drivers. What is perhaps more important, the emergence of the tablet as a viable platform makes that problem stick out even more.
So Google spends 1.8 billion dollars over 5 years, and as a result goes from having Android devices be 1/10th to 1/8th of the tablet market to being 1/2 or 2/3rds? If they achieve that objective, then yes, the rate of return for that investment will swamp any other use of that money.
The 'do nothing' strategy of having that $1.8B sit in cash and cash equivalents for the next 5 years looks to return less than $250M (at the low annualized rates of return on the kinds of securities they would hold it in).
That makes it look like a simple call from the outside. I completely recognize that it is painfully hard for at least one member of the OC [1] to spend like that, and they've had way too much say in how money was spent for the last 5 years.
But the reason I think it is the wrong question is that once you start thinking about it in rates of return, you're valuing dollars against dollars. I think the right question is to ask "Is there anything holding back the growth of Android that only Google, with its resources (cash, brainpower, etc.), can fix?" And I think the answer to that is yes: this one.
[1] The "OC" or "Organizing Committee" is Google's equivalent of the 'executive staff' or the 'executive management group'.
I think we half-agree on this one. I didn't mean what ROI Google will have strictly in terms of dollars, mindshare/market share/etc are all great to have. However, I don't think that great graphics drivers are as important as you think.
It seems to me that Google can (very cheaply) pick the low-hanging fruit in graphics performance, get 80% of the way there, and then see what they need to do. Also, having a "Google" line of devices is very beneficial, as it can be the high end of the Android offerings. I'm typing this reply on a Galaxy Nexus, and I find the phone much better than an iPhone (in general, not just in performance; I haven't used an iPhone recently and don't remember how responsive my 3G was).
There's also a false dichotomy in your post that I need to point out. You say that the money will either go to a graphics driver or will sit in the bank, which is trivially false, as Google can invest it in many other ways.
Fair enough. To your last point: this is weak rhetoric, but in this case it's also a financial tool that is commonly employed to determine rates of return. The comparison is made between investment 'X' and keeping the money around during the time X would be implemented (the 'do nothing' option). It tries to capture the opportunity cost.
To the 80% question: my claim is that this is exactly what Google did. They got 80% of the way there, and in phones that worked. In tablets, however, the additional screen real estate magnified this weakness in graphics to below the 'good enough' threshold. It was great that the Nexus 7 made great strides in this area, but consider: the Nexus 7 is manufactured by Asus, which made the Transformer Prime and now the Transformer Infinity. That the graphics did not improve until Google made it the 'Google Nexus 7' (which sounds like they drove more of the decisions) was a problem for the Android ecosystem in general.
My thesis is that 'fixing' it so that Asus and anyone else can build a fabulous graphics experience on Android is possibly the best investment they could make.
30-40 people? How can you get anything done with that many people? Teams of 5-10 people have built entire operating systems. What makes a video driver so complex?
GPUs do a lot more in mobile today than probably occurs to you off the top of your head if you're not extremely familiar with them.
First of all, you have to worry about the actual act of running OpenGL ES applications. This is a non-trivial API with lots and lots of performance tuning required, and don't forget you need a GLSL compiler (which in turn is not trivial at all).
Now, you need to support video decoding. Usually GPUs accelerate this in some low-power way for common formats (H264, etc), so you need to jump through a lot of software hoops to support that for various formats and such. Also, don't forget to add support for HDCP (also not trivial) and content protection throughout your video pipeline, because if you don't, you will be flayed alive by the content cartels.
After that, you need to optimize for power. This turns out to be way more software intensive than you think because a lot of times software has to make the call about whether to pick super-low-power but high-latency mode X or higher-power but lower-latency mode Y. This is also probably where you handle a lot of thermal runaway style cases.
Finally, throughout this entire process, you have to be making sure that everything works. While you're writing the software to get things up and running, any bug you hit in SW could be indicative of a hardware bug. When you do hit something that looks like a hardware bug, it's not like you say, "oh hey, let's break out the debugger and see what's up!" A relatively easy hardware bug at this stage of development still probably takes on the order of a few man-months, and a hard one takes multiple man-years and millions of dollars (find a very short repro case, root cause it, investigate software workarounds, design the ECO, verify the ECO, kick off a new spin).
These are the bare-bones you-can-possibly-be-a-player-in-the-market requirements; how well you do optimizing these various cases will determine how well your chip can do in the market.
How about they target just one chipset, then? Once that one is a lot faster than all the others due to driver quality, no one will want to buy phones using the other chipsets until those have faster drivers. People wouldn't even have to be technical; gadget reviewers and early adopters would highlight it for them. Games might only support the better chips, or at least recommend them. Eventually phone manufacturers would stop using the lagging chipsets because people just wouldn't buy their phones, so the chipset makers would have to raise the quality of their drivers in order to compete.
This isn't very different from what happened in the PC market over time. Remember how many graphics card makers there used to be before things settled around ATI and nVidia?
What might also happen is that the GPUs will start to behave in a similar way, giving you some of the advantage that iOS already enjoys. Kinda like DirectX or certain "standard" OpenGL extensions.
Android probably doesn't need 100 completely different chipsets. Some number higher than 1 is probably ideal for innovation and competition, yes, but it is obviously inevitable that not every player will survive in the long run anyway.
I suspect that without Google's intervention nvidia will probably win on driver quality in the long term because of their experience in their area along with tegra's current momentum. But you never know - as it becomes obvious that driver quality affects sales some of the other players might catch on in time too.
It looks like this sort of thing is already happening. Phone makers are bringing out fewer new models per year, there are fewer chipsets being used, Android is getting better and better, and the Android reskins that phone manufacturers do are getting more minimal... I suspect it will sort itself out in time.
That's not true. The graphics hardware vendors do help (we have a well-staffed team of dedicated Android driver engineers, many more than your 4 or 5), and we work directly with Google's myriad Android graphics engineers. The drivers are as stable and fast as we can all make them, given Android's architecture and the timescales involved with Android releases and customer hardware releases.
iOS devices use the same PowerVR GPU family as many (most?) Android devices. Is there evidence suggesting they did their own driver? I guess one could tell by comparing what kind of GLES bugs exist in iOS vs Android.
I wonder how much of this is hardware versus software. Keep in mind that end-to-end latency has a number of possible sources:
- touch digitizer
- CPU to do something with the input
- GPU to start rendering the result
- display to show the change
In particular, I worry about the first and the last. Apple can get away with using extremely high cost parts in the iPad due to vertical integration (look at the physical size of A5X, for example), whereas Android vendors generally can't. Considering that CPU and GPU are generally selling points whereas touch digitizer performance and display response time are not, it would be tempting for a margin-sensitive OEM to cut corners on those two things.
There is plenty of precedent for this: look at the grey-to-grey response time of the original Xoom.
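To make the point about where latency comes from concrete, here's a toy end-to-end budget over the four stages listed above. Every number here is hypothetical, chosen only to illustrate how the stages add up and why a slow digitizer or panel can dominate even a fast CPU/GPU:

```python
# Hypothetical per-stage latencies (ms) in a touch-to-display pipeline.
# Illustrative figures only, not measurements of any real device.
stages = {
    "touch digitizer scan": 15,
    "CPU input handling":   8,
    "GPU rendering":        16,   # roughly one 60 Hz frame
    "display response":     20,   # slow panel, high grey-to-grey time
}

total_ms = sum(stages.values())
worst = max(stages, key=stages.get)
print(f"end-to-end: {total_ms} ms, worst stage: {worst}")
```

Note that in this sketch the "invisible" parts (digitizer and display) account for more than half the total, which is the corner-cutting risk described above.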
I always assumed that it was the level of abstraction required for Android to be able to function on a wide variety of hardware. iOS, on the other hand, runs on a relatively tiny number of devices, with hardware Apple chose for itself.
Even with a difference in milliseconds you'd still be able to discern between "smooth" and "not smooth".
Was it necessary to wear a t-shirt that reads "It's fun to use learning for evil!" in the photo shoot for a Forbes spread? This doesn't help the negative perception of the word "hacker". :-/
All due respect to the work you're doing – I'm a former member of the security industry myself (worked on the IPS engine at TippingPoint).
Counterpoint: I love that you wore it, I think the content of the article makes it hard to come to a negative conclusion (especially the comments about stopping development), and most anything that supports dieselsweeties.com is a good thing!
It's fairly easy to change a T-shirt. Whether or not anyone agrees that his appearance is relevant, he wasn't photographed in the audience at the conference or up on stage.
He posed for a photograph in a hotel.
Even if he didn't have a spare shirt, the gift shop in a hotel generally does. That's if he had thought of the issue at all; there'd have been no problem with telling the photographer he had to change. Even if they noted that in the story, it's the picture that's worth 1000 words.
I had a story done a number of years ago, and they sent a photographer to the office. I took several hours to arrange everything to get a good setup for the photo. It paid off: the photo was good, and the photo editor liked it and made it the centerpiece of a story in which many people were quoted. It ran all over in syndication. My point is simply that it's important to think ahead when the media comes knocking. (Along those lines, hmm, maybe he did the right thing with that t-shirt, publicity-wise.)
In any case people can now learn from the "nitpick" and decide for themselves if they are ever in the spotlight what they want to do.
Forgive me if I'm just naive, but I don't get the 'scary' part. Locks have always been 'advisory', and people who have wanted to circumvent them, for both good and evil, rate them by their 'time to disable'.
Hotel locks with hard keys had their issues as well and were pretty trivially picked with simple tools. But the key point is always that you need to bring the 'simple tools', which is to say the locks aren't vulnerable in a way that someone who decides on the spur of the moment to enter the room can easily exploit. They need the plug that fits the power port, they need the software that drives the JTAG wiggler, etc.
So if it is 'scary' that people who are not affiliated with the hotel, either as guests or as staff, can, with premeditation, open a hotel room door without damage, then you need to redefine scary. This has always been true, and will probably always be true, by the nature of hotels and motels.
It should be noted that [some] hotel doors with electronic key cards also have physical key holes (as a backup) that are hidden, but are still susceptible to being picked.
This just supports your point that hotel doors are not 100% secure for anyone who really wants to get through.
Edit: Replaced all with some. The doors at the hotels I worked had backup physical keys in case the battery failed. It's cool that Onity locks can be powered externally if the battery fails. Thanks for the correction.
That's not really the case. While some of these do exist, Onity's locks themselves do not contain any physical keyhole and I've never seen them installed in such a configuration. Other vendors may be different.
The most important thing is that you gave it thought in advance! That is good. You had your reason for wearing the shirt; it might not be the same decision others would have made, but the decision is yours to make based on what you were trying to achieve.
I mean the vulnerabilities. While my exploit has issues (which, as far as I can tell, are issues with timing when reading data from the lock; I lose the first bit of every byte) it's only a matter of time before someone fixes that and has these rolling off the assembly line. All you need is a microcontroller, a resistor, and a connector; that scares me.
Fair enough, and that's why I attempted to tone down the message with my statement of respect. I've followed Cody's work with interest for years.
I do stand by my general point, though. I think it's worth thinking about how we represent ourselves to the general public. The word "Hacker" has an unfortunate negative reputation, and I don't think messages like this help. It really jumped out at me when I opened the article (otherwise I would have kept this nit to myself).
It's pretty obviously tongue-in-cheek. He doesn't look at all evil (sorry Daeken, you look kind of... Jolly) and any real evil people don't let Forbes take their picture.
At CoffeeTable (http://www.coffeetable.com), we’re combining the best parts of commerce, catalog shopping, and tablet devices to create truly inspiring shopping experiences. Whereas ecommerce giants like Amazon and eBay are all about searching and comparing technical specs, we’re putting the fun back into shopping. Discover products, shop with friends, and get that same special feeling as when you walk into a store and they know your name, your size, and exactly what you’d like before you knew it yourself.
Referral Bonus: Refer a candidate that we hire, and win a new, top-of-the-line iPad 3! (64GB, Wi-Fi + 4G)
Looking For:
* Senior iOS developers
* Server-side developers (CT is a Rails shop, but we love Python/Django devs too)
* Front-end web developers
CoffeeTable is a small team (2 developers) looking to grow in a big way. New hires will have a huge opportunity to make a big impact across the board, from product direction, to design, to architecture.
Well funded ($2.5MM Series A from Strategic Partners in the catalog industry) and located right across from AT&T Park in San Francisco.
Such a thing has existed since iOS 2.0 (CFUUIDCreate [1]), and Apple's updated docs on UDID specifically recommend using CFUUIDCreate. The problem is that an application-specific UUID doesn't address all current use cases for UDID. Specifically, ad networks that support Cost-Per-Install (CPI) need an identifier that crosses the application boundary.
Perhaps I misunderstood, but CFUUIDCreate doesn't create an application-specific UUID, it just creates a new, arbitrary, UUID that isn't tied to anything. You could then use that to build your own app-specific UUID mechanism, but the API won't do it for you.
It's relatively straightforward to use CFUUIDCreate as a building block for an application-specific UUID, but you're correct in stating that the API itself simply returns a new, pseudorandom UUID.
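The generate-once-and-persist pattern being described (the API supplies the random UUID; the app supplies the storage) looks roughly like this. It's sketched in Python for brevity, with a placeholder file path standing in for whatever storage a real iOS app would use (e.g. preferences or the keychain):

```python
import os
import uuid

def app_specific_uuid(path="app_uuid.txt"):
    """Return a stable per-app identifier: generate a random UUID on
    first call and persist it, then reuse the stored value thereafter.
    (The path is a placeholder for the app's own private storage.)"""
    if os.path.exists(path):
        with open(path) as f:
            return f.read().strip()
    new_id = str(uuid.uuid4())   # the random-UUID step, analogous to CFUUIDCreate
    with open(path, "w") as f:
        f.write(new_id)
    return new_id
```

Because the identifier lives only in the app's own storage, it cannot be correlated across applications, which is exactly why it fails the cross-app CPI use case mentioned earlier in the thread.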