To me it's a very obvious image processing pipeline: reading the image, grading (in image processing terms this means color work, e.g. adding a hue to the highlights), doing a transformation (e.g. some rotation or similar), grading again, and finally writing the result to disk.
If you wonder what "exactly" they are doing, then unless you open each box (like the 2nd "grade" box is open), they're not meant to tell you. Closed, they're meant to give a high level overview of how your functions etc are connected.
Think of it as seeing your code in your programming editor with the functions folded.
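The folded-functions analogy maps directly onto ordinary pipeline code. A minimal sketch of how that diagram might read as text (the function names like `grade` and `transform` are hypothetical stand-ins for the closed boxes, not any real API):

```python
# Hypothetical sketch of the diagram as plain pipeline code.
# Each function below corresponds to one closed box in the graph.

def read_image(path):
    # stand-in: load pixel data from disk
    return {"path": path, "pixels": [[0.5, 0.5, 0.5]]}

def grade(img, highlight_hue=0.0):
    # stand-in for color work, e.g. shifting the hue of highlights
    img["pixels"] = [[min(1.0, c + highlight_hue) for c in px]
                     for px in img["pixels"]]
    return img

def transform(img):
    # stand-in for a geometric operation, e.g. a rotation
    img["pixels"] = [px[::-1] for px in img["pixels"]]
    return img

def write_image(img, path):
    # stand-in: write the result to disk (here, just report)
    return f"wrote {len(img['pixels'])} rows to {path}"

# The whole graph, read left to right:
result = write_image(grade(transform(grade(read_image("in.png"), 0.1))),
                     "out.png")
print(result)
```

Folding all of those function bodies in your editor leaves you with roughly the same information the closed boxes give you.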
The second example is also obvious. There's a selectable (by dropdown) input source, currently set to read posts from Twitter. Posts read are then fed into a (switchable on/off) filter that either passes them all through or keeps only the ones from Europe.
Of course you need to spend a few minutes (or days) familiarizing yourself with the visual language (like you would with new terms when learning a new programming language), e.g. that the toggle icon means a "switchable" option.
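Read as the kind of code it would fold down from, the second example might look like this sketch (all names are hypothetical illustrations, not Luna's actual API; each UI widget in the graph becomes a parameter):

```python
# Hypothetical sketch: each widget in the graph maps to a parameter.
# The dropdown picks a source; the toggle enables or bypasses the filter.

SOURCES = {
    "twitter": lambda: [{"text": "hi", "region": "Europe"},
                        {"text": "yo", "region": "Asia"}],
}

def read_posts(source="twitter"):        # the dropdown
    return SOURCES[source]()

def europe_filter(posts, enabled=True):  # the on/off toggle
    if not enabled:
        return posts                     # toggle off: pass everything through
    return [p for p in posts if p["region"] == "Europe"]

posts = europe_filter(read_posts("twitter"), enabled=True)
print([p["text"] for p in posts])
```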
Ok, so maybe I'm being a little over the top when I say "I can't understand what a single one of those diagrams is actually doing". I'm not stupid, and sure, I can grok 'em if I spend enough time digging into them.
My point is - I could either waste time piecing together a visual diagram like that (which is going to be fairly different visually from use case to use case - e.g., image processing vs. text processing, resulting in more mental overhead) - or I could look at code, which is (for me, at least) simply much quicker and easier to understand (in many cases).
Furthermore, these are idealized examples. Start throwing in error handling. Concurrent processes. I predict that diagrams that can fully capture complex systems will lean far more towards "visual overload" than not, and really, how useful is a tool like this at that point?
Maybe something like this could be fine if it's just high-level processes, but it just falls apart the more detailed things get.
Hi! I'm one of the guys behind Luna. I completely see your point of view. Working with visual graphs is something you have to learn, exactly the same way as writing code. Unlike code, though, diagrams are thought to be more approachable for people without really strong technical expertise. I can't agree with one of your points, though - namely that the diagrams would look different with each use case. Nodes have a position on a 2D whiteboard - just like text is laid out in a file and displayed by your favorite editor. The colors in the graph indicate the type of data, and all the variable names are hidden because you don't have to manually track where they have been used. If you just look at the lines, you instantly know what is happening to that chunk of data, no matter whether you're processing text, tweets, or programming hardware with it :)
Show me how you support error handling or concurrent processes. Heck, show me nested iteration or even basic recursion - then show me all of that, scaled up to real-world complexity.
My suspicion is that as your model approaches the complexity of real-world systems, the visual language you're creating will be unable to efficiently capture that complexity.
People have been doing "real world" work (commercial products used by hundreds of thousands of people) in environments like Max/MSP and Reaktor. There are also lots of huge commercial projects (Hollywood movie CGI and complex behavior) programmed in node-based compositing and 3D environments -- heck, people even use Quartz Composer for prototyping projects.
Moreover, nothing prevents you from writing code; as they say, Luna maps 1:1 to the code they show next to the diagrams.
But when you want a higher-level view of the whole program architecture, code can't show you that -- with code you only get one source file or another, and even then rarely all of it on a single screen. A visual representation can, and it can group larger units into single modules (e.g. your packages).
>My suspicion is that as your model approaches the complexity of real-world systems, the visual language you're creating will be unable to efficiently capture that complexity.
One can even make a trivial visual language of 2-3 node types that is Turing-complete and can capture ANY complexity. Luna is, of course, far better than that. As for sufficiency, this depends on the level at which one wants to play. You might still write code at the lower function level, but if the language makes it easy to group functionality into larger visual units, it would be great for the larger structure, and also great for configurable parts meant for non-coders.
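A concrete instance of the first claim: the S and K combinators alone are Turing-complete, so a visual language with just those two node types plus application edges can in principle encode any computation. A minimal evaluator for such terms (illustrative only, nothing to do with Luna's actual design):

```python
# Terms are "S", "K", bare variable strings, or pairs (f, x) meaning
# "apply f to x". Reduction rules: K x y -> x,  S f g x -> (f x)(g x).

def spine(term):
    # unwind the left spine of applications: head plus argument list
    args = []
    while isinstance(term, tuple):
        term, arg = term
        args.append(arg)
    return term, args[::-1]

def unspine(head, args):
    for a in args:
        head = (head, a)
    return head

def reduce(term):
    # normal-order reduction to head normal form
    while True:
        head, args = spine(term)
        if head == "K" and len(args) >= 2:
            term = unspine(args[0], args[2:])
        elif head == "S" and len(args) >= 3:
            f, g, x = args[:3]
            term = unspine(((f, x), (g, x)), args[3:])
        else:
            return unspine(head, args)

# The identity function I = S K K, so (I x) reduces to x:
I = (("S", "K"), "K")
print(reduce((I, "K")))  # reduces to "K"
```

Of course "can encode any computation" says nothing about readability at scale, which is exactly the point under dispute.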
> One can even make a trivial visual language of 2-3 node types that is turing-complete and can capture ANY complexity. Luna is, of course, far better than that.
I don't disagree with you on the first part - that you can make a trivial visual language that can capture any complexity. The challenge is: can a user "visually" parse the visual representation of that complexity?
I haven't seen any evidence that Luna can effectively do that at scale. Until I do, I remain extremely skeptical.
I would love to prove you wrong :) I've described error handling in a reply to another question here, and we'll update the website later to illustrate it there too. All the other things, like nested iteration or recursion, will be shown in our documentation and will be available to alpha testers. But there is no magic involved - I'll briefly describe the mechanism here too. Each function definition is a node, and each function usage is a node as well. When defining a function, you can dig inside the node to define the graph of its body. Inside this graph, you can use a node representing a usage of exactly this function - which gives you recursion. I completely understand that a textual explanation is not enough here (heh, which just proves graphs are easier! :D), but live demos and examples will be available for alpha testers.
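Since Luna is said to map 1:1 to code, the mechanism described here (a function's body graph containing a node that uses the function itself) corresponds in textual form to ordinary recursion. A trivial sketch of what that 1:1 text might look like (illustrative, not Luna's actual syntax):

```python
# The definition is one node; digging inside it reveals its body graph.
def factorial(n):
    # base-case branch: terminates the self-reference
    if n <= 1:
        return 1
    # this call is the "node representing a usage of exactly this
    # function" placed inside its own body graph
    return n * factorial(n - 1)

print(factorial(5))  # → 120
```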
What exactly looks difficult to parse about the visual example going:
read -> grade -> transform -> grade -> final touches -> write