I've been using http://qaz.wtf/u/convert.cgi for ages. Little bit more dated interface, but same idea. That utility also does some pseudo-alphabets using characters from other scripts.
I'm a fan of the X1 Carbon line; my current home laptop is a 3rd-gen that I grabbed on eBay for cheap a while back. Might leapfrog to the next model when it's released, but the "Alexa for PC" integration is kinda sketching me out.
I know it's just factory-installed bloatware at this point, but I'm concerned that before long Amazon's going to be paying off OEMs for physically integrated spy hardware, and while I love my ThinkPad systems, Lenovo's willingness to include Alexa makes me nervous.
Interesting - since I'm not on Windows, I didn't know about Alexa integration.
On the other hand, Lenovo seems to use hardware components with good support by open-source drivers. Hopefully one day they (or someone else) will support something like coreboot, which will create a more open system (minus microcode in the CPU).
No offense, but "let's give the poor people cryptocurrency" reads a lot like the start of a satire piece, a lampoon of out-of-touch Bay Area tech culture denizens.
Ubiquitous WiFi kiosks with inbuilt cameras, huh? Ostensibly, they're not allowed to track individual user locations, but the combination of functions present in the devices means it would be trivial to start connecting independent databases and building profiles on users. If you were a malicious actor, how would you go about it?
The PoCs on tracking individuals with MAC addresses are old news (and, in fairness, newer iOS devices use random MAC addresses for WiFi probe requests), let alone the user fingerprinting you could do on browsers when people actually use these things. So you've got a database of devices, and then on top of that you start doing facial recognition and gait analysis to collect another set of individual data points. Then you connect devices to people, and you have a pretty nice system for tracking individuals moving through the city, even those with location services and such disabled.
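To make the linking step concrete, here's a toy sketch (all data and names made up) of how two independent kiosk logs, one of Wi-Fi device sightings and one of face-recognition sightings, could be joined: a device and a face that keep co-occurring at the same kiosk in the same time slot are probably the same person.

```python
from collections import defaultdict

# Hypothetical logs: (identifier, kiosk_id, time_slot) tuples.
wifi_log = [
    ("dev_A", "kiosk1", 1), ("dev_A", "kiosk2", 2), ("dev_A", "kiosk3", 3),
    ("dev_B", "kiosk1", 1), ("dev_B", "kiosk1", 2),
]
face_log = [
    ("face_X", "kiosk1", 1), ("face_X", "kiosk2", 2), ("face_X", "kiosk3", 3),
    ("face_Y", "kiosk1", 1), ("face_Y", "kiosk1", 2),
]

def link_devices_to_faces(wifi_log, face_log):
    """Match each device to the face it co-occurs with most often."""
    dev_places = defaultdict(set)
    for dev, kiosk, t in wifi_log:
        dev_places[dev].add((kiosk, t))
    face_places = defaultdict(set)
    for face, kiosk, t in face_log:
        face_places[face].add((kiosk, t))
    matches = {}
    for dev, dp in dev_places.items():
        # Pick the face whose sighting set overlaps this device's the most.
        matches[dev] = max(face_places, key=lambda f: len(dp & face_places[f]))
    return matches

print(link_devices_to_faces(wifi_log, face_log))
# dev_A pairs with face_X, dev_B with face_Y
```

With a few more time slots per person, the co-occurrence counts separate cleanly, which is the whole point: neither database is individually identifying, but the join is.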
Paranoia? Maybe, but it wouldn't take much for say, Amazon, to start doing this. And Bezos wouldn't be bound by city regulations on citizen privacy.
I interviewed with a NYC company that bought a bunch of old phonebooths and was looking into installing cameras on them and recording video to sell to whoever they could. They had this idea that they could do style recognition on clothing and sell that.
> The PoCs on tracking individuals with MAC addresses are old news (and, in fairness, newer iOS devices use random MAC addresses for WiFi probe requests)
MAC address tracking is obsolete. All phones (including iOS devices) broadcast a full list of SSIDs that they have previously connected to when attempting to connect to wireless networks. That alone is enough to uniquely identify most people.
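A quick sketch of why that's identifying (SSID lists here are invented for illustration): the set of networks a phone probes for acts as a fingerprint that's stable across MAC randomization.

```python
import hashlib

def ssid_fingerprint(probed_ssids):
    """Order-independent fingerprint of a device's probed network list."""
    canonical = "\n".join(sorted(set(probed_ssids)))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

phone_monday = ["HomeNet-5G", "CoffeeShop", "CorpGuest"]
phone_friday = ["CorpGuest", "HomeNet-5G", "CoffeeShop"]  # same phone, fresh random MAC

# Same SSID set -> same fingerprint, so the randomized MAC doesn't help.
assert ssid_fingerprint(phone_monday) == ssid_fingerprint(phone_friday)
```

Even a handful of SSIDs is usually enough to be unique in practice, and the names themselves leak where you live and work.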
> Prevents your smartphone or tablet from leaking privacy sensitive information via Wi-Fi networks. It does this in two ways:
> It prevents your smartphone from sending out the names of Wi-Fi networks it wants to connect to over the air. This makes sure that other people in your surroundings cannot see the networks you’ve connected to, and the places you’ve visited.
> If your smartphone encounters an unknown access point with a known name (for example, a malicious access point pretending to be your home network), it asks whether you trust this access point before connecting. This makes sure that other people are not able to steal your data.
Last I checked, they did because it's part of the actual spec, though if anyone has definitive evidence to the contrary (either for iOS or flagship Android phones), I'd be curious to see it.
You know, I totally forgot this was a thing. I'm sure modern phones do it. Last time I was on an airplane, a couple of months ago, I was messing around with airmon-ng, and I was amazed at the amount of personally identifiable information that people's WiFi drivers were just spewing into the ether.
>Vim's commands and keybindings interface (not the UI).
I recently decided to finally get good at Vim, but the UI, being text-only, can be charitably described as "awful."
Solution: Sublime Text 3 with the NeoVintageous[1] plugin. Takes a beautiful, highly customizable, extensible editor and adds most of the Vim bindings. I can do all of my main development work in a gorgeous editor, and when I have to hop onto an unfamiliar machine, I'm still good at using Vim.
You may want to check out SpaceVim. I've been developing on a remote machine lately; before that, I was using ST2/3. I could have continued to do so using (e.g.) sshfs, but I think I'm probably more productive this way. One of my favorite features so far is that the terminal is bound to SPC+', which lets you easily yank, paste, and navigate using the vim tools. It's not quite as pretty as ST or Atom; Powerline fonts and a 24-bit color terminal emulator are recommended.
Do you think fluid simulation is advanced enough at this stage to simulate something like, "Assuming sea level rises 6 feet at the coast, this is how the surge will propagate across the landscape"?
Or will it be more like "If 6 feet of water is in this (fairly localized) area, this is how the local region will be affected"?
Possibly, though I imagine the complication there is getting a good enough base geometry to be happy with your simulation.
You could cut some corners by starting from the national floodplain data, which has a lot of engineering hours already in estimating the floodable area for a region, based on water height (and is what your area's flood insurance rates are based on).
I guess there are two types of modelling of interest in the video: first, modelling where the water will go, and second, visualising how it would look at a particular location. Modelling where the water will go is fairly simple and well-known at this point: you take a raster digital elevation model, where the value of each cell represents average elevation across the cell (or, in some cases, elevation at the cell's centre point), then effectively pour in water at your starting cell or cells (the coastline in this example) and let it propagate across the raster according to elevation. Some of the techniques are covered here: https://www.supermap.com/EN/online/Deskpro%206.0/SDTechTheme...
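The propagation step can be sketched in a few lines (a minimal flood-fill, not whatever tool the video used, and with an invented toy DEM): a cell floods if it sits below the water level and is reachable from a source cell through other below-level cells.

```python
from collections import deque

def flood(dem, sources, water_level):
    """BFS flood-fill: return the set of (row, col) cells that end up underwater."""
    rows, cols = len(dem), len(dem[0])
    flooded = set()
    queue = deque((r, c) for r, c in sources if dem[r][c] < water_level)
    flooded.update(queue)
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and (nr, nc) not in flooded and dem[nr][nc] < water_level:
                flooded.add((nr, nc))
                queue.append((nr, nc))
    return flooded

# Toy DEM: column 0 is the coastline; the ridge of 9s blocks the water.
dem = [
    [0, 1, 9, 1],
    [0, 2, 9, 1],
    [0, 1, 9, 1],
]
wet = flood(dem, sources=[(0, 0), (1, 0), (2, 0)], water_level=6)
# The low-lying cells left of the ridge flood; those behind it stay dry.
```

Note this simple reachability model is why low ground behind a levee correctly stays dry even though it's below the water level; real tools add refinements like flow volume and levee-failure scenarios on top.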
The limiting factor (as another poster has commented) is likely to be the resolution of the available data, as processing power is generally good enough these days to handle any reasonable scenario - the study area is limited to the storm and adjacent areas, and there are few areas with a comprehensive DEM available to, e.g., sub-meter accuracy. With the increasing availability of drone LIDAR data we may well get comprehensive data that pushes the boundaries of what we can do, and which will require generalisation to coarser resolution. I suppose that when the effects of even a few inches of water can be so catastrophic in terms of insurance claims, increased resolution could be extremely valuable. For example, I know insurance companies already have sophisticated risk models where flood risk is just one of many layers of modelled risk that they combine to avoid having too much risk in any one area.
The second type of modelling is the 3D modelling seen in the video. With a good 3D GIS dataset as seen in e.g. Google Earth you could take the results of the analysis described above and use this to determine which areas will be affected by flooding. It would then be a case of adding a layer powered by fluid simulation with the other layers in the dataset set up as 3D entities around which the fluid must flow. The point is that once you had your data set up as well as your fluid modelling, you could zoom to any location and view the results in that location, and do this on the fly - whereas in the video it is almost certainly a one-off canned job for a single location.