Hacker News | nuitblanche's comments

Hi,

I am Igor Carron, the CEO of the company.

Our technology is different from the one used by Lightmatter and Lightelligence: we currently do not do integrated photonics.

We are starting from the ground up by first building a photonic co-processor that performs one function (to be extended) of general interest to AI. The current function is a random projection. We expect it to grow more complex over time as we grow, but our particular focus is on computations that are difficult to perform in electronics. In particular, our recent focus has been on Transformers/Large Language Models, where we think our technology will be a game changer in the training and inference of these gigantic Machine Learning models. On the training side, we published two papers at NeurIPS last year in the main conference, and presented a more detailed one at one of the workshops: https://arxiv.org/abs/2012.06373 We are going to present a poster at Hot Chips this year about our foray into HPC: https://arxiv.org/abs/2104.14429 Other papers in AI are listed on the LightOn AI research page: https://lair.lighton.ai/
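For a sense of what the operation looks like mathematically, here is a toy NumPy sketch of a nonlinear random projection of the form y = |Rx|^2, a common model for optical random features (this is my own illustrative simplification, not LightOn's actual pipeline; all names and sizes are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

def opu_like_projection(x, out_dim, rng):
    """Toy model of a nonlinear optical random projection, y = |R x|^2,
    with R a fixed random complex Gaussian matrix. R is large and dense,
    which is the regime where optics is attractive versus electronics."""
    d = x.size
    R = rng.normal(size=(out_dim, d)) + 1j * rng.normal(size=(out_dim, d))
    return np.abs(R @ x) ** 2

x = rng.normal(size=784)                # e.g. a flattened 28x28 image
y = opu_like_projection(x, 2000, rng)   # project to 2,000 random features
print(y.shape)
```

In silicon you would have to store and multiply by R explicitly; the point of the optical approach is that the physics performs this dense matrix product at the speed of light, for free.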

Physically speaking, the Optical Processing Unit fits into a 2U rack (our focus has not been on miniaturization yet; we avoid premature optimization) and is connected to the CPU bus through a PCIe connection.

The technology has been running in datacenters for the past three and a half years for the oldest prototypes (we decommissioned one a month ago or so), with Machine Learning computation unit tests performed every ten minutes over that whole period.

We are making the technology available for researchers to use on our LightOn Cloud cluster (https://cloud.lighton.ai/what-is-lighton-cloud/) and people can even rent one or several OPUs (LightOn Appliance: https://lighton.ai/lighton-appliance/ ).

I agree, we need to redesign our website.

Igor.


Let us note that, from the article, this translates into only 3 percent of total electricity generated nationwide. This is in large part due to the inability of wind power to generate a constant power load. From what I recall, the nuclear power plants can produce a load of about 100 x 1400 MW = 140,000 MW, and this is estimated to be about 20 percent of the electricity produced nationwide.

Roughly speaking, it looks like the nuclear power plants have three times as much installed capacity as wind and deliver six times as much power to the grid. Since the nuclear plants run near 100 percent of capacity, I wonder why wind has only a roughly 50 percent capacity factor: is it maintenance or wind availability?
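That 50 percent figure falls straight out of the quoted numbers; a back-of-the-envelope check, assuming the installed-capacity comparison refers to the same wind fleet (figures taken from the comment, not independently verified):

```python
# Back-of-the-envelope capacity-factor check from the quoted figures.
nuclear_capacity_mw = 100 * 1400              # ~140,000 MW installed
wind_capacity_mw = nuclear_capacity_mw / 3    # "three times as much capacity"
nuclear_share, wind_share = 0.20, 0.03        # shares of national electricity

# Production per unit of installed capacity, relative to nuclear:
relative_cf = (wind_share / wind_capacity_mw) / (nuclear_share / nuclear_capacity_mw)
print(relative_cf)  # 0.45: wind runs at roughly half nuclear's capacity factor
```

Since nuclear is near 100 percent, that puts wind at roughly 45-50 percent, consistent with the question above.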



This is so dumb! I stopped reading after they mentioned this e-cat worked with hydrogen.

Hydrogen is not an energy source. It is not available in nature and has to be produced with .... power.


I'll translate that into something a bit more correct: "Hydrogen gas is not an abundant energy source. While there are some (rare) natural sources of hydrogen, it's usually made artificially, which takes energy. The energy cost in production is therefore more than the gain in burning it."

That is, however, moot, since hydrogen fusion turns hydrogen into helium. (Note that there was nothing in the article showing a detectable trace of helium; that was a key data point in the Fleischmann and Pons paper, which was eventually tracked down to contamination from another lab in the same building.)

Fusion consumes hydrogen and gives off far more energy than is needed to dissociate hydrogen from water. That hydrogen can then be used as feed for the fusion step.
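The gap is about six orders of magnitude; a rough check using standard rounded constants (~26.7 MeV released per helium nucleus in the p-p chain, ~286 kJ/mol to split water; the script is just illustrative arithmetic):

```python
# Orders-of-magnitude check: fusion energy per hydrogen atom vs. the
# electrolysis energy needed to free that atom from water.
J_TO_EV = 6.242e18
AVOGADRO = 6.022e23

fusion_ev_per_h = 26.7e6 / 4          # p-p chain: 4 H -> He-4 + ~26.7 MeV
split_kj_per_mol_h2 = 286             # H2O -> H2 + 1/2 O2 (enthalpy)
split_ev_per_h = split_kj_per_mol_h2 * 1e3 / AVOGADRO * J_TO_EV / 2

ratio = fusion_ev_per_h / split_ev_per_h
print(f"fusion returns ~{ratio:.1e}x the energy spent freeing the hydrogen")
```

So chemically-sourced hydrogen is an energy carrier, but as fusion fuel it would be a net source by a factor of millions; the catch, of course, is achieving the fusion at all.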


Yeah, the utopian horseshit shoehorned into this article notwithstanding, the primary source of loose hydrogen is currently fossil fuels.


This is only true if you're going to burn it (chemical), not if you're going to fuse it (nuclear).


I developed my answer here:

Are Perceptual Hashes an instance of Compressive Sensing ? http://nuit-blanche.blogspot.com/2011/06/are-perceptual-hash...


I agree. Briefly looking at the description, it is some sort of compressed sensing. The differences from traditional CS are in fact minimal, and the scheme is in line with some of the work undertaken in manifold signal processing. The differences are:

- The proposed hash is deterministic. In CS you generally want to rely on random projections (though there are some results for deterministic schemes) in order to get some sort of universality and, by the same token, some sort of robustness.

- Steps 3 and 4 are the most fascinating, because they are clearly one of the approaches used in manifold signal processing for images. To summarize: in order for pictures to be close to each other on a manifold, you really want to defocus them. I'll put something on my blog on the matter. This is why the hashes of two similar images are close in the "hash" or manifold space.

- For one image, the hash provides 16 measurements (the 16 bits of the hash result). That would be fine if the initial picture were already at the size and color depth of the picture after steps 1 and 2; in effect, that information is lost. However, CS also has "lossy" schemes such as 1-bit compressed sensing, where you retain only the sign of each measurement (a little like step 6). Signals reconstructed from these 1-bit measurements are not the originals, but they are close.
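The 1-bit compressed sensing idea mentioned above fits in a few lines: keep only the sign of each random measurement, and nearby signals disagree on few bits while unrelated signals disagree on about half (a toy illustration with made-up sizes, not the perceptual-hash algorithm itself):

```python
import numpy as np

rng = np.random.default_rng(0)

def one_bit_hash(x, R):
    """Keep only the sign of each random measurement,
    as in 1-bit compressed sensing."""
    return np.sign(R @ x)

n, m = 256, 64                   # signal length, number of 1-bit measurements
R = rng.normal(size=(m, n))      # random projection matrix

x = rng.normal(size=n)                   # a "signal" (e.g. a flattened image)
x_near = x + 0.05 * rng.normal(size=n)   # a slightly perturbed copy
x_far = rng.normal(size=n)               # an unrelated signal

close = np.mean(one_bit_hash(x, R) != one_bit_hash(x_near, R))
far = np.mean(one_bit_hash(x, R) != one_bit_hash(x_far, R))
print(close, far)  # nearby signals flip few sign bits; unrelated ones about half
```

The fraction of flipped bits is proportional to the angle between the two signals, which is exactly what makes Hamming distance on such hashes a usable similarity measure.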

(ps: I write a small blog on CS).


I have compiled a long list of online talks here: https://sites.google.com/site/igorcarron2/csvideos

Start from the bottom.


In compressed sensing, we don't do guesses. Guesses are reserved for inpainting.


FWIW, some of these constructions (Bloom filters, Error Correcting Codes) are connected to Compressive Sensing.
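The connection is easiest to see for the first of these: a Bloom filter summarizes a sparse set with a handful of hashed "measurements" and answers membership queries without storing the set, much as CS recovers a sparse signal from few linear measurements. A minimal sketch (parameters and names are illustrative):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash functions set k bits per item;
    a query answers 'maybe present' (with one-sided false positives)
    or 'definitely absent'."""
    def __init__(self, m_bits=1024, k_hashes=4):
        self.m, self.k = m_bits, k_hashes
        self.bits = [0] * m_bits

    def _indices(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for idx in self._indices(item):
            self.bits[idx] = 1

    def __contains__(self, item):
        return all(self.bits[idx] for idx in self._indices(item))

bf = BloomFilter()
bf.add("compressed sensing")
print("compressed sensing" in bf)  # True
```

A counting variant is literally a linear sketch y = Ax with a sparse binary A, which is where the formal link to sparse recovery comes in.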


The Wired article gave the wrong impression: it used an example of inpainting (the Obama picture) that is NOT compressed sensing. Compressed sensing, on the other hand, applies directly to MRI because the MRI machine samples randomly in the Fourier domain, thereby having access to all the spatial information needed to fully reconstruct an image. The reconstruction algorithms used to be not that good (they relied on SVD/least squares). Candès, Romberg, Tao and Donoho then published papers showing that the reconstruction could be done in a totally different way AND that it was exact. With these new reconstruction algorithms, MRI data is acquired much more efficiently than it was four years ago, because of compressed sensing.
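A toy version of that MRI story, assuming nothing beyond NumPy: sample a sparse signal at a few random Fourier frequencies, then recover it by l1 minimization, here via a basic iterative soft-thresholding (ISTA) loop rather than a production solver:

```python
import numpy as np

rng = np.random.default_rng(0)

# A k-sparse signal of length n, observed through m random Fourier samples.
n, m, k = 128, 50, 5
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)

F = np.fft.fft(np.eye(n)) / np.sqrt(n)    # unitary DFT matrix
rows = rng.choice(n, size=m, replace=False)
A = F[rows]                               # random rows = random Fourier samples
y = A @ x_true                            # the "MRI" measurements (m < n)

# ISTA: x <- soft_threshold(x + A^H (y - A x), lam). Step size 1 is safe
# because A has orthonormal rows, so ||A^H A|| = 1.
x = np.zeros(n, dtype=complex)
lam = 1e-3
for _ in range(3000):
    x = x + A.conj().T @ (y - A @ x)
    mag = np.abs(x)
    x = np.where(mag > lam, x * (1 - lam / np.maximum(mag, lam)), 0)

rel_err = np.linalg.norm(x.real - x_true) / np.linalg.norm(x_true)
print(rel_err)  # small: the sparse signal is recovered from m < n samples
```

Least squares would have infinitely many solutions here (50 equations, 128 unknowns); the l1 penalty is what singles out the sparse one, which is the whole point of the Candès/Romberg/Tao and Donoho results.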

For more on the controversy over the Wired article: http://nuit-blanche.blogspot.com/2010/05/compressed-sensing-... http://nuit-blanche.blogspot.com/2010/03/why-compressed-sens...

The new reconstruction solvers: https://sites.google.com/site/igorcarron2/cs#reconstruction

Hardware implementing compressive sensing: https://sites.google.com/site/igorcarron2/compressedsensingh...

