http://www.rollingstone.com/culture/news/steve-jobs-in-1994-the-rolling-stone-interview-20110117
Machines have gotten smaller, faster and cheaper.
Software, by contrast, has gotten bigger, more complicated and much more expensive to produce.
Writing a new spreadsheet or word-processing program these days is a tedious process,
like building a skyscraper out of toothpicks. Object-oriented programming will change that.
To put it simply, it will allow gigantic, complex programs to be assembled like Tinkertoys.
Instead of starting from the ground up every time, layering in one line of code after another,
programmers will be able to use preassembled chunks to build 80 percent of a program, thus
saving an enormous amount of time and money. Because these objects will work with a wide range
of interfaces and applications, they will also eliminate many of the compatibility problems
that plague traditional software.
Microchips aren't getting much smaller, but they're still getting faster and cheaper. Since they're never fast or cheap enough, cloud and cluster computing have filled the niche as "common sense" solutions: simple, effective patterns for doing more with less (in this field). Languages, by contrast, aren't getting much smaller; they quickly fill the interesting and practical space they can reach, and they often fail to scratch their own "itches".
- One example is C, whose need to generate code mid-compilation led to its extremely complex and unstructured preprocessor: effectively a full language designed after C, for C.
- Another C example is makefiles: "sufficiently simple" structures for dependency specification and resolution. Makefiles easily outpace themselves: there's no "best" place to add structure or metaprogram with them. And so the `make` tool has inspired tools like `shake` that seek to fill in some of the gaps.
- Another example is Haskell's GHC, which is built upon rather small and strict structures. The plethora of languages that interlock within it largely do so at the researcher level: e.g. Core only looks somewhat like C--, and the typing extensions are a bit scattered, though very useful.
- Another example is Ruby, whose small size and elegant architecture make it very general-purpose, but whose design requires you to write C for performance and thus for efficient strictness.
- Another example is Coq, whose great research background and support make it a cutting-edge formal verification engine, but whose metaprogramming and formal type systems have difficulty relating to the rest of the language.
- Another example is ML, whose module system and metaprogramming capabilities amaze and inspire, but which appears to suffer from Lisp's problem of too many good implementations to choose from.
Flock is tiny, yet it fills many of the gaps between different areas of computing. Since I don't have the time or the brainpower to constantly repeat myself on different levels of abstraction, Flock is built on category theory.
- Category theory is great for very abstract and very specific expression.
- It's far smaller than set theory or many other forms of formal logic, and thus it's capable of being far more general.
- People already understand many ways and places to apply category theory, so we don't have to figure out nearly as much from scratch.

Since I don't want Flock to be tied to a specific model of computing, I use term rewriting as the core model of computation (typed lambda calculi have taught us that term rewriting goes well with type systems); a minimal sketch of such a core follows below. Since I know I want to write programs that optimize other programs, I know I want "common sense" profiling built in. And since computers do best with finite, discrete chunks, I've baked finiteness and discreteness into the language as well: you get type-validity, combinatorial methods, etc. for free.
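To make the term-rewriting point concrete, here is a minimal sketch of what such a core can look like in Haskell. Everything in it (the Term and Rule types, match, normalize, the Peano example) is illustrative scaffolding of my own, not Flock's actual definitions from the paper.

```haskell
-- A minimal, illustrative term-rewriting core; the names are stand-ins
-- for exposition, not Flock's definitions.
module TinyRewrite where

import           Control.Monad (foldM)
import qualified Data.Map      as Map

-- Terms are labeled trees; variables appear in rule patterns.
data Term = Var String | App String [Term]
  deriving (Eq, Show)

-- A rule rewrites anything matching its left-hand side into its right-hand side.
data Rule = Rule { lhs :: Term, rhs :: Term }

type Subst = Map.Map String Term

-- Match a pattern against a term, threading a substitution for the variables.
match :: Term -> Term -> Subst -> Maybe Subst
match (Var v) t s = case Map.lookup v s of
  Nothing -> Just (Map.insert v t s)
  Just t' -> if t' == t then Just s else Nothing
match (App f ps) (App g ts) s
  | f == g && length ps == length ts = foldM (\s' (p, u) -> match p u s') s (zip ps ts)
match _ _ _ = Nothing

-- Apply a substitution to a term.
apply :: Subst -> Term -> Term
apply s (Var v)    = Map.findWithDefault (Var v) v s
apply s (App f ts) = App f (map (apply s) ts)

-- One rewrite step: try each rule at the root, otherwise recurse into subterms.
step :: [Rule] -> Term -> Maybe Term
step rules t =
  case [ apply s (rhs r) | r <- rules, Just s <- [match (lhs r) t Map.empty] ] of
    (t' : _) -> Just t'
    []       -> case t of
      Var _    -> Nothing
      App f ts -> App f <$> firstStep ts
  where
    firstStep []       = Nothing
    firstStep (x : xs) = case step rules x of
      Just x' -> Just (x' : xs)
      Nothing -> (x :) <$> firstStep xs

-- Rewrite to normal form (may diverge if the rules don't terminate).
normalize :: [Rule] -> Term -> Term
normalize rules t = maybe t (normalize rules) (step rules t)

-- Example: Peano addition.  add(z, y) -> y ;  add(s(x), y) -> s(add(x, y))
peano :: [Rule]
peano =
  [ Rule (App "add" [App "z" [], Var "y"]) (Var "y")
  , Rule (App "add" [App "s" [Var "x"], Var "y"])
         (App "s" [App "add" [Var "x", Var "y"]])
  ]
-- normalize peano (App "add" [App "s" [App "z" []], App "s" [App "z" []]])
--   == App "s" [App "s" [App "z" []]]
```

A typed or profiled core would decorate Term and step with more information, but the shape stays this small, which is the point.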
Flock lets you tune all of these details as parameters. Code interoperability is built in: a Flock expression is defined as the most general interpretation of whatever is given. All dependencies are as explicit or as implicit as you like: everything fits in Flock in its place. Flock supports gradual, multidimensional, discrete dependent typing. No types? Nominal types? Typing up to isomorphism? Categorical types? All covered. Flock is explicit about exactly what you need to cross the boundaries between its sublanguages.
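As a rough illustration of what "typing as a tunable parameter" could look like from a host language, here is a hypothetical sketch. The TypingLevel and Boundary names are mine, not Flock's, and flattening the disciplines into a simple total order is a deliberate simplification.

```haskell
-- Hypothetical sketch: the typing discipline as an explicit, per-boundary
-- parameter. The names and the ordering are illustrative assumptions.

data TypingLevel
  = Untyped      -- no types at all
  | Nominal      -- types compared by name
  | UpToIso      -- types identified up to isomorphism
  | Categorical  -- types as objects, programs as morphisms
  deriving (Eq, Ord, Show)

-- A boundary between two sublanguages records what each side accepts;
-- crossing it means meeting the stronger of the two requirements.
data Boundary = Boundary { fromSide :: TypingLevel, toSide :: TypingLevel }

requiredAtBoundary :: Boundary -> TypingLevel
requiredAtBoundary (Boundary a b) = max a b
```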
- You might be saying: no, this is too flexible. Well, Flock is completely compatible with stricter systems. Write it!
- Or you might think: this is more than I need. Great! Flock is proven to work in pieces. Need more later? It's still compatible!
- Or you might think: this is too high-level! How will this be implemented? Well, the plan is to implement it initially in Haskell and eventually rebase to a compiler that emits LLVM IR. Flock's semantics fit quite nicely with assembly primitives, but a lot of work is needed to build the connection.
- Or you might think: looks cool, but all my work is in X. Alright, Flock can be implemented in X. We're considering basic implementations in Haskell, C, LLVM IR, Rust, and Ruby.
Every Flock program can be in a different language, connected only where it counts. This means Flock can be used for markup, Markdown, arbitrary DSLs, LaTeX, Make, etc. The parsers/lexers/interpreters/optimizers can easily be used with existing engine parts.
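As a hedged sketch of what that reuse might look like, here is a pluggable front end in Haskell: any surface syntax that can parse into a shared core term reuses the same downstream optimizer and interpreter. The Core and FrontEnd names are illustrative assumptions, not an existing Flock interface.

```haskell
-- Hypothetical sketch of a pluggable front end: markup, a DSL, or a
-- makefile dialect all become the same Core, so downstream stages are
-- written once. Illustrative names only.

data Core = Leaf String | Node String [Core]

data FrontEnd = FrontEnd
  { feName  :: String
  , feParse :: String -> Either String Core  -- surface text to core, or a parse error
  }

-- Downstream stages are written once, against Core.
optimize :: Core -> Core
optimize = id  -- placeholder: real rewrite passes would go here

render :: Core -> String
render (Leaf s)    = s
render (Node _ cs) = concatMap render cs

runPipeline :: FrontEnd -> String -> Either String String
runPipeline fe src = render . optimize <$> feParse fe src
```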
For now, the beneficiary of all this is corporate America,
which needs powerful custom software to help manage huge databases on its networks.
Because of the massive hardware requirements for object-oriented software,
it will be years before it becomes practical for small businesses and individual users
(decent performance out of NeXT's software on a 486/Pentium processor, for example,
requires 24 megs of RAM and 200 megs on a hard drive).
Still, in the long run, object-oriented software will vastly simplify the task of writing programs,
eventually making it accessible even to folks without degrees from MIT.
For now, the entities who stand to gain the most from Flock are the biggest ones: Google, Facebook, Twitter, Apple, GitHub, etc. These are some of the companies that see the largest gains from a wide-scale architectural improvement. Since Flock works well anywhere from high-level to low-level contexts, its solutions more easily create value at scale. (This is because Flock is naturally explicit about dependencies and naturally implicit about specifications, allowing for easier reuse and modification of components.) The nice thing is that Flock is tiny enough to be minimally implemented practically anywhere. I believe Flock can be minimally embedded in most high-level languages in developer time on the scale of weeks or less. Specifically, given the paper and someone with an understanding of a high-level language on par with Stephen Diehl's "What I Wish I Knew When Learning Haskell", I believe I could pair-program a minimal implementation in a matter of hours.
In the long run, my hope is that Flock will vastly simplify the process of writing and reading programs, eventually making this "common sense" easily accessible to anyone with the means to utilize it.
Flock is intended to be better than other languages in a finite number of very specific ways. This by no means makes it a better language overall. If Flock were a perfect language, it'd already be the universal language of the universe; that, however, is clearly not the case. Flock is small, and it is young. It has very exciting potential, but it's not yet a ready-to-use general-purpose language.
...it seems to take a very unique combination of technology, talent, business and marketing and luck to make significant change in our industry. It hasn't happened that often.
- technology: Flock, and its developments (culminating currently in the paper)
- talent: My talent, compounded with the talent of all those who develop Flock
- business: I will need to make partnerships in business, which should be enabled by how attractive Flock can be to businesses.
- marketing: Flock is designed to be easy to market. Price: free. Development: well, how much and where do you want it? Luckily, Flock is easy to measure, which makes it easier to price.
- luck: Flock is designed to minimize how much luck I need (or really, how much anyone who uses it needs). Of course, I still need a good deal.
The other interesting thing is that, in general, business tends to be the fueling agent for these changes.
It's simply because they have a lot of money.
They're willing to pay money for things that will save them money or give them new capabilities.
And that's a hard one sometimes, because a lot of the people who are the most creative in this business aren't doing it because they want to help corporate America.
I believe much of it stems from friction between expectations: the manager needs easy access to technical progress. Easy access may be achieved by:
- Technical management (expensive, time-consuming, and less of a specialty in management: you can't be the best at both).
- Technical debt: we put it off until later. It's a simple risk-reward tradeoff whose effects can be unpredictable.
- Technical requirements: these could involve style, testing, reviews, development processes, etc. (time-consuming, require more skilled workers for better results, must be maintained, expensive).
- Technical specialization: if the specialty is focused enough, it's easy to keep track of progress. This doesn't scale easily (since it's not always clear which specialties to pick).
All of these take some degree of time, energy, money and planning. Different teams with different requirements make different compromises, but one thing is clear: it's usually nicer to take a little less time, energy, money, and planning to get stuff done.
Well, the nice thing is that Flock allows you to be very specific about how you want to adopt it.
There are 4 main models, 16 core model combinations, and 32 core model relations. Flock is as easy to remember as a deck of cards: 52 main facts.
Your choice of which facts to use uniquely determines a sublanguage, and each fact is very customizable.
So, the cost of adopting Flock is pretty much up to you: you can learn and use as much or as little as you like. The smallest sublanguages can be explained in seconds.
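As a rough sketch of the deck-of-cards framing, a sublanguage can be modeled as nothing more than a chosen set of facts. The constructor names and numbering below are placeholders that mirror the 4 + 16 + 32 breakdown above, not the actual facts from the paper.

```haskell
import qualified Data.Set as Set

-- Placeholder "facts": 4 main models, 16 core model combinations,
-- 32 core model relations, 52 in total.
data Fact
  = Model Int        -- 1..4
  | Combination Int  -- 1..16
  | Relation Int     -- 1..32
  deriving (Eq, Ord, Show)

allFacts :: [Fact]
allFacts = map Model [1 .. 4] ++ map Combination [1 .. 16] ++ map Relation [1 .. 32]

deckSize :: Int
deckSize = length allFacts  -- 52, one "card" per fact

-- Your choice of facts uniquely determines a sublanguage.
newtype Sublanguage = Sublanguage (Set.Set Fact)

choose :: [Fact] -> Sublanguage
choose = Sublanguage . Set.fromList
```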
To make step-function changes, revolutionary changes, it takes that combination of technical acumen and business and marketing —
and a culture that can somehow match up [to] the reason you developed your product
and the reason people will want to buy it.
I have a great respect for incremental improvement,
and I've done that sort of thing in my life,
but I've always been attracted to the more revolutionary changes.
I don't know why. Because they're harder. They're much more stressful emotionally.
And you usually go through a period where everybody tells you that you've completely failed.
We tried to sell it in a really cool box, but we learned a very important lesson.
When you ask people to go outside of the mainstream, they take a risk.
So there has to be some important reward for taking that risk or else they won't take it
Flock is not mainstream (yet), so everyone who uses Flock takes a risk proportional to how much they use it. The important reward is entering the "Flock family": gaining the benefits of whichever parts and features of the language they use. Ideally, we want people to use all of Flock, so naturally we must make using the different parts easier. Flock's structure gives us a nice place to start for many kinds of customization and improvement, and good Flock implementations/compilers are naturally structured along Flock's abstractions, giving us good starting points for a huge variety of problems.
Minimal Flock implementations are easy. If using Flock to improve an implementation is also easy, implementations will grow naturally. Flock is built to make improving itself easy, so it will naturally grow to fill its resources. This makes it ideal for AI, since Flock reflects its properties onto its models and we want AI to grow naturally:
- We want it to have strict limits
- We want it to have easy access to different levels of abstraction and specification
- We want it to easily reuse code
- We want it to be small enough to analyze with AI
  - Which it is: you can apply algorithms from brute force to hill climbing, simulated annealing, the simplex method, etc. (a generic sketch of such a search follows after this list)
  - All of these may be applied to pretty much everything.
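As a sketch of what the hill-climbing entry in that list means in practice, here is a generic climber in Haskell. Nothing in it is Flock-specific: the neighbourhood and score functions are assumptions standing in for "one rewrite step" and "a profiling-based cost estimate".

```haskell
import Data.List (maximumBy)
import Data.Ord  (comparing)

-- Generic hill climbing: given a neighbourhood function (e.g. all single
-- rewrite steps of a program) and a score (e.g. a profiling-based cost
-- estimate, negated so higher is better), walk uphill until no neighbour
-- improves on the current candidate.
hillClimb :: (a -> [a]) -> (a -> Double) -> a -> a
hillClimb neighbours score = go
  where
    go x = case filter (\y -> score y > score x) (neighbours x) of
      []         -> x                                      -- local optimum
      candidates -> go (maximumBy (comparing score) candidates)
```

Brute force and simulated annealing slot into the same two-parameter shape, which is the sense in which these methods apply to pretty much everything represented this way.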
What we learned was that the reward can't be one and a half times better or twice as good.
That's not enough.
The reward has to be like three or four or five times better to take the risk to jump out of the mainstream.
Flock's reward is determined by what's implemented in it.
- For example, the paper has a rather high maximum reward: it's designed to be as rewarding as possible to someone within my technical bailiwick, up to my personal resource constraints.
- However, Flock currently has no interpreter/parser/etc. implementation, so the only reward the paper offers is some degree of mastery of this model.
- The idea is that this will naturally filter the readers down to those who have a significant amount of rapport with the paper
- But, at the same time, establish rapport between the paper and stuff I can't easily reach at the moment.
- Thus, the paper alone should be sufficient to bootstrap Flock for those who can and desire to read it.
- Since it's an exciting idea, some will freely share it. One could measure different levels of adoption by how far from me it's being freely shared, and subsequently used.