Hacker News | throwaway12357's comments

> ssl: Remove default support for SSL-3.0 and added padding check for TLS-1.0 due to the Poodle vulnerability.

> ssl: Remove default support for RC4 cipher suites, as they are considered too weak.

I don't follow Erlang news, but I was just wondering: aren't these fixes coming way too late?


You could tell the `ssl` application to avoid SSL-3.0, TLS-1.0 and RC4. Here is an explanation by RabbitMQ:

https://www.rabbitmq.com/ssl.html

`ssl` is different from an end-user application in the sense that you can configure this in a safe way yourself, and that has indeed been the typical workaround.

Now we just make it impossible for people to misconfigure this in any way.

The padding check for TLS-1.0 was also backported to 17.5.

Another point worth mentioning is that Erlang/OTP uses OpenSSL, but only for the cryptographic ciphers. `ssl` is a complete standalone implementation of TLS in Erlang, and this automatically avoids a lot of trouble: in the common case, an attack on OpenSSL leaves the `ssl` application unaffected.


I'm sorry, but I can't help finding this situation ridiculous.

Unless you're from a very poor country there is no excuse for not having a working keyboard.

I don't know if that was part of the test, but if it was, it's worse than the big blue-chip corps asking about the number of piano tuners -- which can actually be valuable for understanding how one reasons about unknown problems/areas.

About the server being slow: well, I don't know the magnitude of the slowness or the anger, but unless you're a ramen-fueled startup there is no excuse for having slow machines. It's a management failure. It's a waste of developer time. Instead of coding, the dev has to deal with stress-inducing constant 5-second hiccups or similar things.

Put yourself in the interviewee's shoes. Do you really want to work in a company that can't conduct a proper interview and has broken/slow hardware?

And yes, I do use vim and I do like w very much.


> Do you really want to work in a company that can't conduct a proper interview and has broken/slow hardware?

Slow is relative. The employee in question was trying to figure out why a remote server was experiencing extreme slowdown. He ssh-ed in, but he was able to type far faster than the beleaguered remote could echo his keystrokes. So he needed to just carefully type his commands, wait for them to appear, and then press enter. Instead, he typed angrily and too quickly, swore at the connection, and eventually started slamming his keyboard in a fit of pique.

It was a totally reasonable real-world slow machine problem, and a totally useful insight into the mindset of a potential new employee.

Not egregious at all. We're developers, sometimes we have to walk into an annoying situation and deal with it like adults.


Yeah but poor guy - in an interview situation the pressure is different and public. He might have done fine at his own desk.


It wasn't during the interview. According to knodi123's first comment it was during the guy's first week on the job.

>Joking aside, we once let a guy go during his 1 week probationary period because he got a little too angry at a slow server.


I have been forced to work with really old computers connected to equally old research hardware. At some point it is probably more economical to let someone figure out the interface and solder something together with an Arduino or similar, so we can start using a new computer. But that point is never now. Until then we have this old chain of hardware just to get the data off the old machine via 5 1/4 floppy disks.


tip: messages should fade after 15? seconds


If you happen to get back to this thread and do have the time then please post it. I would appreciate it very much since I'll be taking that route. Thanks.


> that’s disappointing, since a computer running the existing algorithm would take 1,000 years to exhaustively compare two human genomes.

I did some quick googling [1] and found:

> Our algorithm divides the problem into independent `quadrants' ...

> Our results show that our GPU implementation is up to 8x faster when operating on a large number of sequences.

It's still soul crushing. Why did our genome have to be that long :(

BTW, do you have numbers for setups with hundreds of GPUs?

I'm also left wondering about results using stochastic approaches, and how accuracy relates to problem size.

[1] http://ieeexplore.ieee.org/xpl/articleDetails.jsp?reload=tru...


I'm confused by this. 1,000 years seems a bit steep to me.

Suppose you have two files, `bob.genome` and `mary.genome`. Let's say they are 1 GB each [1].

I think I can diff two 1 GB files in less than 1,000 years.

diff(1) shows "deletions, insertions, and substitutions".

Therefore, I don't believe it. Yet. What did I miss?

1. http://stackoverflow.com/questions/8954571/how-much-memory-w... (Rounded up because Fermi estimation [2].)

2. https://what-if.xkcd.com/84/


> diff(1) shows "deletions, insertions, and substitutions".

diff(1) doesn't give you a _minimal_ set of edits to apply to go from one file to the other, just _a_ set of edits.


Also, I think he's picturing two almost-equal files. In that case the average running time should be way lower, no? (I believe the quadratic time is worst case)


> two almost-equal files. In that case the average running time should be way lower, no?

Yes, indeed! What you're thinking of is "output-sensitive" algorithms. There are some output-sensitive algorithms for edit distance. The fastest one I found is in "Improved Algorithms for Approximate String Matching" by Dimitris and Georgios Papamichail. They note:

"We designed an output sensitive algorithm solving the edit distance problem between two strings of lengths n and m respectively in time O((s-|n-m|)min(m,n,s)+m+n) and linear space, where s is the edit distance between the two strings."

http://arxiv.org/abs/0807.4368
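I haven't implemented the Papamichail algorithm, but the simpler Ukkonen-style "band doubling" trick already gives output-sensitive behaviour, roughly O(s * min(n,m)): only fill DP cells within distance t of the main diagonal, and double t until the answer is certified. A rough Python sketch (function names are my own):

```python
def edit_distance_banded(a, b):
    """Output-sensitive edit distance via band doubling (Ukkonen-style).

    Only cells within t of the main diagonal are filled; t is doubled
    until the computed distance d satisfies d <= t, which certifies
    that an optimal edit path fits inside the band.
    """
    if len(a) > len(b):
        a, b = b, a
    t = max(1, len(b) - len(a))
    while True:
        d = _banded_pass(a, b, t)
        if d is not None and d <= t:
            return d
        t *= 2


def _banded_pass(a, b, t):
    n, m = len(a), len(b)
    INF = float("inf")
    # Row 0: distance from "" to each prefix of b, restricted to the band.
    prev = [j if j <= t else INF for j in range(m + 1)]
    for i in range(1, n + 1):
        cur = [INF] * (m + 1)
        if i <= t:                      # column 0 is inside the band
            cur[0] = i
        for j in range(max(1, i - t), min(m, i + t) + 1):
            cur[j] = min(prev[j] + 1,                        # delete
                         cur[j - 1] + 1,                     # insert
                         prev[j - 1] + (a[i - 1] != b[j - 1]))  # sub/match
        prev = cur
    return prev[m] if prev[m] != INF else None
```

For two almost-equal strings the band stays narrow, so the work is close to linear; in the worst case it degrades to the usual quadratic fill.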


Cool. The naive approach yields O(exp(s)*n); this looks like a nice improvement.



Yes, but complexity theorists are mostly interested in worst-case analysis, I believe (which I think is rather unfortunate) -- so the quoted numbers are probably what you get from plugging in 100,000^2 * dt or so.


It also has a quadratic lower bound: it has to fill in the whole matrix of m*n cells, at least for the simple implementation. There may well be some optimisation for almost-identical strings.
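For concreteness, here is that simple full-matrix dynamic program -- a minimal Levenshtein sketch in Python (with the standard two-row trick so space is linear, though time is still m*n no matter how similar the inputs are):

```python
def levenshtein(a, b):
    """Classic Wagner-Fischer edit distance.

    Fills every cell of the (len(a)+1) x (len(b)+1) matrix, so it is
    quadratic in time regardless of input similarity; keeping only two
    rows makes the space linear.
    """
    prev = list(range(len(b) + 1))      # distances from "" to prefixes of b
    for i, ca in enumerate(a, start=1):
        cur = [i]                       # distance from a[:i] to ""
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                   # delete from a
                           cur[j - 1] + 1,                # insert into a
                           prev[j - 1] + (ca != cb)))     # substitute/match
        prev = cur
    return prev[-1]
```

e.g. levenshtein("kitten", "sitting") returns 3 (two substitutions plus one insertion).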


1,000 years? Really?

Isn't it more likely that somebody misquoted "slow, like a half an hour" as "slow, like a THOUSAND YEARS"?


  >>> 1000 / ((1024. ** 6) / 3e9 / 86400 / 365)
  82.0593593076069
The algorithm is quadratic in the input size. For a gigabyte of data, that's (1024^3)^2 = 1024^6 operations. Dividing that by 3 * 10^9 operations/second (assuming a 3GHz CPU), 86400 (the number of seconds in a day), and 365 (the number of days in a year), we obtain the runtime in years, assuming that comparing a single byte takes exactly one operation. Dividing 1000 by that number, we get ~82 operations per byte comparison, and that doesn't look unreasonable.


They're quoting exponential (2^N), not quadratic (N^2) time.

If on some machine a quadratic-time algorithm took, say, a hundredth of a second to process 100 elements, an exponential-time algorithm with the same constant factor would take about 40 quadrillion years.
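Back-of-envelope check in Python, assuming the same constant factor for both algorithms:

```python
SECONDS_PER_YEAR = 365 * 86400

# A quadratic run taking 0.01 s on n = 100 elements fixes the constant.
c = 0.01 / 100**2              # seconds per "operation"

# An exponential-time (2^n) run on the same machine, same n = 100.
t_exp_seconds = c * 2**100
print(t_exp_seconds / SECONDS_PER_YEAR)   # ~4e16, i.e. tens of quadrillions of years
```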


That's a different section of the article; the thousand years clearly refers to the edit distance, which is quadratic.


From the description of "a grid ... flooding diagonally", I think it's not comparing the two 1 GB strings just once; it's effectively comparing every pair of prefixes of those strings:

  for i := 1..bob'length loop
    for j := 1..mary'length loop
      editdistance(substring(bob,1,i), substring(mary,1,j));
    end loop
  end loop


>BTW, do you have numbers for setups with hundreds of GPUs?

I saw a talk on it a while ago; I can only remember they were using CUDAlign and Smith-Waterman (the basic idea is the same). Doing some googling, this seems to be reasonably recent work with GPUs and CUDAlign (DOI 10.1109/CCGrid.2014.18).

>I'm also left wondering about results using stochastic solutions.

Another talk; I think they were running Smith-Waterman too. The speculative part was the traversal of the matrix to get the edit distance, which is not the most time-consuming part of the algorithm. I got in late for the talk and didn't get to hear what they did about filling the matrix in the first place, but I imagine they might have done something similar. I'm not very familiar with Smith-Waterman so I can't go into details.
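For anyone unfamiliar with it, the Smith-Waterman matrix fill is the same kind of quadratic dynamic program as edit distance, just maximising a local-alignment score instead of minimising edits. A toy Python sketch (the scoring values here are arbitrary placeholders; real tools use substitution matrices and affine gap penalties):

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Minimal Smith-Waterman: returns the best local-alignment score.

    Fills the full (len(a)+1) x (len(b)+1) matrix -- the quadratic
    matrix fill that GPU implementations parallelise along
    anti-diagonals. Cells are clamped at 0, which is what makes the
    alignment local rather than global.
    """
    prev = [0] * (len(b) + 1)
    best = 0
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            score = match if a[i - 1] == b[j - 1] else mismatch
            cur[j] = max(0,
                         prev[j - 1] + score,   # diagonal: match/mismatch
                         prev[j] + gap,         # gap in b
                         cur[j - 1] + gap)      # gap in a
            best = max(best, cur[j])
        prev = cur
    return best
```

The speculative traceback mentioned above happens after this fill, which is why the fill itself dominates the runtime.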


While that page in particular may look like a Thomas Ptacek fanpage :) that doesn't invalidate its content. So thanks for sharing.

Anyway, what I really wanted to say was that, for those who missed the link, this page has a link to a meta-"awesome" list:

https://github.com/sindresorhus/awesome


Yeah, tptacek's Amazon recommended reading list was merged in last month: https://github.com/paragonie/awesome-appsec/commit/097a1ddba...


Yep, and it probably deserves its own Show HN entry from the author. I bet it would get a lot of positive attention.


I started reading that as

> Squats and deadlifts are how I broke my back

While I take care to contract my abs at all times and keep a straight back during SQ, DL and MP, I have been getting some lumbar pain/discomfort. Not when I lift, but typically 1 or 2 days after.

Sleeping is not optimal either, as the muscles contract. Lately I find myself sleeping on my side. I know it's the advised posture, but some years ago I could sleep (i.e. eventually wake up) on my stomach with no problems. I can still fall asleep that way, but during the night I end up switching position -- and I'm aware of it because I "wake up" for a brief second or so.

Anyway, do you have any tips? Do you ever get a sore lumbar spine, or overall back soreness?


I have been trying to do 5 minutes a day of "ignorant meditation" (just sitting on the floor, clearing my head, and focusing on breathing), but I have been seriously slacking off these past weeks.

Are there any worthwhile youtube videos on mindfulness meditation?

And is Vipassana meditation "better" than the other types?


I found the iOS "Yoga Nidra" app to be a good starting point. There is a trial version that gives you a 10-minute walkthrough.


What would be the impact on the Euro currency if Greece left the Euro-zone?

I would guess the Euro would take a small hit, but bear speculation wouldn't have much to feed on.

This is because (1) Greece is a small part of the Eurozone and (2) the other countries that were having problems (Portugal, Spain, Italy, Ireland) seem to have stabilized.


I imagine the effect would be 'is this contagion?'

But perhaps a fear of bipolar contagion: will other weaker economies leave (and the Euro become increasingly like the Deutsche Mark: high, stable, low interest rates), or will Germany leave (and the Euro become a currency associated with volatility)?


Reading this I can't help comparing it against SpaceX.

SpaceX is actually 2 years older than Virgin. But SpaceX is Getting Things Done at warp speed for some years now, while Virgin Galactic keeps having multiple crashes. Despite SpaceX having a harder mission.

Is it all due to the Elon Musk effect?

What's the secret?


I just figured that Virgin Galactic doesn't have nearly the resources that SpaceX had or has. Getting stuff into orbit with hundreds of millions of dollars to spend would be easier than getting stuff into sub-orbit with peanuts to spend.

But then I checked Wikipedia and:

"After a claimed investment by Virgin Group of US$100 million, in 2010 the sovereign wealth fund of Abu Dhabi, Aabar Investments group, acquired a 31.8% stake in Virgin Galactic for US$280 million, receiving exclusive regional rights to launch tourism and scientific research space flights from the United Arab Emirates capital. In July 2011, Aabar invested a further US$100 million...."

On the SpaceX side, according to this page, SpaceX spent $390 million developing Falcon 1 and Falcon 9, total:

http://www.parabolicarc.com/2011/05/31/nasa-analysis-falcon-...

I don't know what the difference is. Maybe this is another example of how people should stop trying to use airplanes to get to space.


The difference probably has to do with the fact that SpaceX has a functional revenue model beyond what's essentially a pre-order. Further down in the Wikipedia article for SpaceX, as of 2012 they had taken in over $4 billion in lifetime revenue. Also, they got a $1 billion investment from Google and Fidelity in exchange for 8.333% of the company this past January.

I think the answer really is money, SpaceX has more of it because it has built a product it can actually sell right now.

[1] http://en.wikipedia.org/wiki/SpaceX#Funding


That seems backwards. They didn't have ongoing revenue until they proved themselves capable. Their early days were a similar situation to Virgin Galactic now, but, apparently, with even less money and doing harder stuff.


SpaceX and Virgin Galactic are building completely different products. The only commonality is that they both go into space (and Virgin Galactic barely even that).

SpaceX is not actually doing much in the way of 'new', to date. Their main product is an improved rocket engine design, but it's just an improvement on an existing product. (The self-landing rockets are amazingly impressive, but they haven't managed to achieve success with them yet.)

Virgin Galactic is a completely new design (launching a passenger rocket plane in mid flight), at least in the non-military sector. That's a whole world of new unknowns they are having to overcome and, when you consider that, it's incredibly ambitious what they are hoping to achieve. They are less likely to achieve success as a result unfortunately. But well done for being the first to try.

The other big difference of course is money. SpaceX has way more funding behind it, and a much greater commercial potential.


The X-15 was a rocket plane that went into space in the sixties. It was operated by the air force and NASA. I don't know if there's a big difference in the novelty compared to more traditional rockets.


Slightly different requirements: the X-15 was single-seat, had no provision for sightseeing, was flown by trained test pilots who could handle high G forces, etc.

Also, X-15 had much more tolerance for failure, frankly. When you're doing something for the first time with military test pilots, you can be 95% reliable. Virgin has to be nearly 100%, and that last 5% is a bitch.


In 2008 SpaceX had a terrible year (3 launch failures in a row) and almost went bankrupt http://www.space.com/5693-spacex-falcon-1-falters-time.html

edit: along with Tesla. interesting story: http://inspiremore.com/in-3-days-elon-musks-tesla-motors-spa...


> Virgin Galactic keeps having multiple crashes.

Uh, what? This is the only crash they have ever had. They did have an accident on a test stand in 2007 when a tank blew up, but that wasn't a crash.


Why are the SpaceX missions harder? Only a couple of aeroplanes have ever gone faster than Mach 3, and with significant difficulty. In comparison there are numerous rocket launch vehicles with thousands of successful launches. The atmosphere is a very hostile place and aerodynamic control is not trivial.


Getting stuff into a suborbital trajectory is basically a subset of getting stuff into orbit. It's like: if you can drive from NYC to LA, you can drive from NYC to Chicago.

It could very well be that Virgin's chosen approach is more difficult because they are using airplanes instead of rockets. But nothing forced them to choose that approach, and it's not the mission. If you find it harder to get to Chicago than I do to get to LA because I'm driving a car and you're using a kayak, that doesn't mean your mission is harder.


Except a car costs $30,000 and a kayak costs $300, with much lower running costs. Virgin are not trying to be the first to get into space; they're trying to reduce the cost and make it more affordable.


Lower cost of access to space is SpaceX's big goal as well. So far they're doing much better at it. I wouldn't be surprised if SpaceX could meet Virgin's price for a suborbital trip, if they felt it was worth their while. But they have vastly larger fish to fry.

