
On the current state of human-computer interaction

How we got to where we are, and where I think we ought to go

Disclaimer: This is my opinion. I am willing to be proven wrong about anything I say here, including matters of history. Please feel free to discuss.

Table of Contents

  1. A highly selective, abridged history of computer interface design
  2. Three models of human-computer interaction
  3. A tale of two user interfaces
  4. The rise of apps
  5. Interactive dialectic
  6. Daisy, Daisy, Give me your answer do
  7. A proposal
  8. On our obsession with graphic design
  9. Why I think things like the "people-centric UI" have not caught on
  10. On Glass
  11. What we can do

A highly selective, abridged history of computer interface design

In the beginning... was the command line.

The progression of computer interface design naturally started very close to the architecture of computers. Almost every computer in use today, and almost every computer ever built (with the notable exception of Lisp machines, among others), has followed a procedural execution paradigm. Instructions are performed more or less in sequence, with occasional changes of location for flow control and conditional execution.

The method of writing software for computers developed around procedural execution, first with assembly as a more human-readable version of the instruction set architecture used by the computer, then with languages like FORTRAN. And since computers were not marketed as a consumer technology until the 1980s, programming continued to be the primary method of interaction with computers for many years, with a few notable exceptions such as the NLS.

The NLS, or oN-Line System, was a fascinating outlier in the history of humanity's rocky relationship with computers. It was introduced with a research paper authored by Douglas Engelbart et al. and a live conference demonstration that came to be known as the Mother of All Demos. It presented a significant advancement in the way humans could interact with computers (or alternatively, how computers could augment human intellect and capabilities). Many of the revolutionary technologies that were introduced with it, such as the pointing device (mouse) and hypertext, were carried forward into the currently dominant class of computing software. But the key difference that failed to take hold was more philosophical, lost along the journey from SRI International to the Xerox Palo Alto Research Center. The idea that information could be infinitely structured and arbitrarily correlated was discarded along with the arcane command chords (and Engelbart himself) and replaced with a more comforting analogy: self-contained documents, filed into directories.

This is not to say that the folder-file analogy was undesirable or inadequate. It was certainly a contributing factor to the appeal of the Xerox STAR over the standard computers of the time, because it couched its data structure in terms of things with which its users would be familiar. Neither was discrete storage an unfamiliar concept at the time; both the DOS and Multics/UNIX predecessors to the modern set of operating systems represented digital assets in a directory-file structure. The use of graphical user interfaces instead of text-based interfaces to represent this hierarchy was the major factor that led to widespread adoption of the Window-Icon-Menu-Pointer interface, through Apple Computer with the Macintosh, and through Microsoft with Windows. The overwhelming popularity and ease of use of such an interface quickly made it the de facto standard for what we now consider "desktop computing".

Three models of human-computer interaction

Almost as soon as the graphical user interface escaped the Xerox Palo Alto Research Center (PARC), there was already evidence of a split in the way people thought computers ought to be used. When interaction was predominantly text-based (and questionably on a screen at all), there was one model of working with computers: the command-line interface, or shell. Shell usage could fill another article entirely, but for the purpose of this section it can be simplified as follows:

  • Digital assets are stored in a hierarchy of directories and compartmentalized files.
  • Basic operations are performed by commanding the computer: specifying an executable file to be loaded, modifying the characteristics of its operation with flags or switches, and specifying files or streams of data upon which to operate.
  • Complex operations can be performed by redirecting the output of one basic operation into the input of another (piping).

The last point bears additional explanation. The basic operations exposed through a shell, such as copying or reading the content of files, were deliberately constructed to perform exactly one kind of action. The key to the great flexibility of shells, especially in the UNIX family of shells which bundled a suite of utilities for manipulating text streams, was the pipe (|). This allowed users to compose basic operations to form complex ones, in the same way that one would compose functions in algebra to make new functions. Equivalently, a user would apply successive refinements to a stream of data: "take this data, do this (1) to it, then do this (2) to the result of that, then..."
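
To make the idea concrete, here is a minimal sketch in Python (my own illustration, not something drawn from the systems described above) of composing small, single-purpose operations the way a shell pipeline does:

    # Each step does one small thing; a complex operation is the composition of
    # steps, just as `grep error access.log | wc -l` composes small utilities
    # over a text stream.

    def compose(*steps):
        """Return a function that feeds its input through each step in order."""
        def pipeline(data):
            for step in steps:
                data = step(data)
            return data
        return pipeline

    # "take this data, do this (1) to it, then do this (2) to the result of that..."
    count_error_lines = compose(
        str.splitlines,                                    # split the stream into lines
        lambda lines: [l for l in lines if "error" in l],  # keep matching lines (grep)
        len,                                               # count them (wc -l)
    )

    print(count_error_lines("ok\nerror: disk full\nok\nerror: timeout\n"))  # prints 2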

When software was freed of its text- and paper-based output methods by the use of screens, limitless possibilities for representing the same data became available. Users became separated into two relatively loose groups correlating with two operating systems: the Macintosh OS and Windows. Neither represented the ability to work with information in the same way that a shell did previously, but differences in the key user interface components hinted at a parting of the ways.

Windows 3.1 was a faithful representation of the directory-file paradigm, almost to a fault. Basic usage of the operating system revolved around a root window from which all other windows were derived; closing the root window would end the Windows session. Each window contained either a listing of the items located at the represented point in the data hierarchy or a view for manipulating data. The preferred method of interaction was therefore to locate the desired asset in the hierarchy and "open" it with the appropriate manipulation software.

In contrast, the Macintosh organized itself around applications from the start, an approach embodied today in a functional element that has changed little since its introduction: the Dock, a row of icons, each representing one class of functions related to working with a single type of digital asset. This is what we now know as the application model of computing. One would enter the context of an application, preparing to work with a particular kind of asset, then locate the appropriate asset by asking the application to "open" it. The directory-file structure was still usable via the Finder, but this was a thin abstraction over the fact that the Finder was itself an application context designed for the express purpose of providing this interface (viz. for working with directories as its single content type).

Thus we can observe the emergence of three ways of working with digital assets that are in common use today, for which I shall use these names:

  1. The Functional paradigm: discrete, specialized operations are composed upon each other to achieve the desired effect.
  2. The Object-oriented paradigm: semantically organized content is operated upon by the appropriate software.
  3. The Application paradigm: contexts specialized for working with single types of content.

The relationship between the latter two paradigms has been well explored over the last twenty-five years. Neither has approached the theoretical flexibility afforded by the functional paradigm, but each has evolved in response to the other, as we shall see below.

A tale of two user interfaces

The champion of the application paradigm on the desktop, the Macintosh OS and later OS X, underwent relatively little change throughout its existence. It retained its basic appearance and function through several visual refreshes corresponding with the introduction of improved hardware. The most prominent change to date has been the introduction of Launchpad with OS X 10.7, codenamed "Lion", providing a dedicated modal view for finding applications. Also of note is Spotlight, a semantic search box, whose impact we shall come to shortly.

Windows, on the other hand, suffered continual changes to its user interface to meet consumer expectations and to introduce new functionality. Windows 95, built on the DOS-based 9x line rather than the NT kernel, introduced the famous Start menu. It provided both an application-style folder called Programs and a set of shortcuts to common starting points in the filesystem hierarchy. The Start menu continued to receive refinement throughout Windows 98, ME, 2000, XP, Vista, and 7, most notably with the introduction of the semantically aware type-to-launch text box in Vista. Windows 7 also reworked the taskbar from a simple window switcher into a launcher for pinned applications, analogous to the OS X dock.

Semantic search is an addition common to both interfaces. Acknowledging the difficulty of finding things on increasingly cluttered computers, both operating systems included this feature to provide basic out-of-box awareness of the underlying directory-file structure, of the monolithic applications that are installed, and, to some extent, of basic user queries. Spotlight has generally been more full-featured out of the box, with the ability to define words and perform simple mathematical operations, but on both operating systems independent developers have built custom launcher software with even more semantic awareness, a phenomenon that will be addressed below.

The two major operating systems were not the only players at this game. Unbeknownst to much of the popular consumer technology industry, Linux drew a small but forceful following starting in 1991. Groups of independent developers came together to build window managers in every flavor imaginable, from improved imitations of the OS X dock and system bar to tweaked trays mimicking the Windows window switcher, and even to no-mouse-required interfaces like ratpoison, which tiled windows with key chords. This naturally became a fertile testing ground for tweaks to the two major interfaces, and also for building extremely customizable windowing systems that could match their developers' more esoteric needs. It is impossible to perform an exhaustive characterization of the features that such development created, but the communities behind these window managers and desktop environments have managed to produce equivalents for every change that Apple and Microsoft made, and even more.

The rise of apps

Unlike the object-oriented directory-and-file system that dominated desktop-class computing, the application paradigm had a long history in the mobile space. Personal digital assistants and feature phones were designed with what amounted to an application drawer, not because it was the best interface for using these devices, but because of the limitations of the hardware that could be manufactured at the time: resistive touchscreens that needed a special stylus to register input; a miniaturized keyboard and a trackball; or else a T9 keypad. Capacitive sensors existed, but were largely considered a curiosity rather than a critical component of the technology of the future.

All of this changed in January 2007. In a masterful move of cross-domain integration unforeseen by almost everyone, Apple, in a nail-bitingly fragile keynote demonstration, revealed an idea for a combination of portable music player, mobile phone, and browser that redefined the direction and pace of the technology industry.

The presentation of the iPhone took the technology industry by storm. Jobs had managed to put together a demo not just of the technology that existed, but also of the technology that he wanted to sell in the future. And so compelling was the allure of manipulating digital assets by touch that consumers were driven almost overnight to a frenzied desire for such a product to exist.

Upon its release, consumers soon became keenly aware of the extent to which their computing sensibilities, so coddled by the comparative power and flexibility of their desktop-class computers, had been restricted on the iPhone. Structured as a two-dimensional version of the OS X dock for more effective use of the small screen, Springboard felt incomplete--it was designed to launch contexts for working with data, but the only contexts available at launch were the ones preloaded onto the iPhone: a combination dialer and voicemail manager; a music player; a browser; and a few other things. Recognizing this deficiency, Apple released toolchains for developing native applications for the iPhone, and the rapid descent into madness began.

Meanwhile, in response to popular demand for technology similar to the iPhone, Google accelerated its release plan for Android and pushed its first versions to market soon thereafter. Its base similarity in function and appearance drew many skeptical opinions as to whether it was a shameless clone of the iPhone interface, but its relative friendliness to customization by original equipment manufacturers quickly drove it into equal competitive ground.

With Windows Phone 7 in 2010, Microsoft re-entered a game in which it had previously dabbled under Windows Mobile. Adapting an old accessibility-oriented interface design language to the restricted screen size, Microsoft tried to put its own flavor on the swelling body of software that was by that point emerging for iOS and Android, with mixed success.

Interactive dialectic

Lost among the clamor for capacitive-touch mobile devices and software designed for them was a fundamental shift in the acceptable way to work with computers. To see this in action, we must return to the filesystem manager to observe what has changed.

File managers are, at their core, a graphical representation of the directory-file hierarchy that underpins almost all modern computing devices. Directories represent semantic contexts, categorizing digital assets by their relevance to each other. For example, when working on a presentation, a user might have the presentation slides, image files for each figure in the presentation, and perhaps a rich text document for scripting the speech that would accompany the presentation slides. These would be stored together in an appropriately named directory, thus:

work/
    presentations/
        19900101_lisp_machines/
            fig1.jpg
            fig2.png
            fig3.png
            presentation.pdf
            script.odt
        19900615_fortran/
            presentation.pdf
    forms/
        reimbursement.pdf

For reasons mostly unknown to the average consumer, most likely as an attempt to simplify the nature of mobile computing, the application-paradigm interface popularized by Apple and Google and subsequently adopted by every group with a stake in the mobile space eschewed the directory-file structure. It would have seemed inappropriate to bother users of mobile devices with such things as a filesystem, or to burden them with the task of deciding which files should be grouped together.

Applications appeared to be the most appropriate way to expose the function of a computer when screen size was limited. Instead of drilling around in lists of files, it made much more sense to devote the entire screen to the context of managing one kind of content, such as music, calendar events, or email. But the work that could be accomplished on a mobile device became compartmentalized, even restricted, by architecture decisions that were sound at inception and poor in hindsight. The cost was the ability to organize content semantically, to associate different kinds of content with each other, and to move it to where it was needed and use it as the task at hand required.

User demands for such a system grew so complex, and the cost of redesigning it to keep up with those rapidly changing demands so high, that it became easier for users to stop concerning themselves with what they had given up to be on the go, and to settle for doing certain things with their phones, others with tablets, still others with their televisions, and everything else with a desktop computer. And since these device classes had never before been expected to work together, many proprietary solutions, focused on rich media consumption, sprang up to accommodate the insatiable demand for abjection. This led to the creation of content platforms rather than an Internet of Things: a balkanization of the global network into relatively isolated superpowers instead of a space of democratized interconnectivity. Within these walled gardens, users could be locked into the services provided by the manufacturers of their devices and milked for their discretionary funds.

This model of monetizing mobile technology soon exposed a weakness. The application model was a fine way to organize content that already existed within the user's grasp. But what of content that the user did not have on hand? What about the weather, for example, or movies the user did not yet know about? Recognizing that users demanded simple paths for discovery and notification, companies began to invest in a fourth paradigm of computing, one that would try to replicate semantic awareness in a world of increasingly fragmented digital lives.

Daisy, Daisy, Give me your answer do

A few entities such as Nuance and Wolfram, accompanied by a small set of startups, most of which were quickly acquired by giants like Apple and Google, now accelerated work on digital assistant software. This represented a further abstraction of computing: away from discrete, semantically organized assets; through islands of rich consumption curated by content type; and into the realm of semantic digital entities. Instead of working with the names of files or a blob of music, one would now refer to entities such as "the weather" or "mom" and be able to get a meaningful response.

This was not a new idea. The idea of machines that could understand natural language queries was commonplace throughout popular culture, with such prominent examples as HAL 9000 from 2001: A Space Odyssey, the thorough exploration of the rights, responsibilities, and behavior of intelligent machines by Isaac Asimov, and the computers that run the starships in Star Trek. The development of services such as Google Now and Siri represented the first large-scale effort in the modern context to achieve this lofty goal.

In the ideal case, such an assistant would be a welcome complement to both the object-oriented desktop-class interface and the application interface. By understanding context at the level of the user's own intelligence, it would be helpful without demanding attention, present when commanded, and imperceptible when not needed. If implemented correctly, it could even help to achieve the long-lost goal set forth by Douglas Engelbart in the Mother of All Demos: to augment the human intellect with technology.

Unfortunately, all currently available digital assistants are constrained by the platforms on which they are implemented, and by the companies that develop them. Siri, for example, is limited to devices sold by Apple and running Apple's proprietary mobile operating system, and is furthermore run entirely on Apple servers. Any semblance of intelligence or wit is in fact a hard-coded relation that is executed in an Apple data center and sent back to the user's client. Neither is Google Now any better. Despite being available across platforms, it is still controlled largely by Google servers. Its understanding is therefore limited to what Google deems important enough to support.

A proposal

If we are serious about building sufficiently intelligent digital assistant software, then we must recognize that this is an unsustainable business model for creating one. By keeping their intelligence engines under proprietary control, companies like Apple, Google, Microsoft, and Wolfram are necessarily undertaking the insurmountable task of characterizing as much of the world as possible, in order to satisfy the needs of as many people as they can. I would speculate that no single entity has the power to achieve this, not even Google, which directs the vast majority of Internet users' traffic.

How, then, can we overcome this limitation? First we must declare that users have the right to be assisted with the software that is best for them. "Best" means several things in context:

Users have the right:

  • to use software that meets their needs.
  • to use software that does what they want and does it well.
  • to use software that respects them as people, their right to communicate and express themselves without restriction, and their right to privacy and to be secure in their personal lives.
  • to choose the software that meets these requirements.
  • to produce the software that meets these requirements if no such software is available.

A digital assistant that meets these requirements, particularly the requirement that users should be able to produce the software that meets their needs, would necessarily be:

  • Able to function with no Internet connection
  • Modular and extensible (see the sketch below)
  • Open-source, under a license that respects the users' software rights enumerated above
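
To make the "modular and extensible" requirement concrete, here is a purely hypothetical sketch in Python of what a locally run, plug-in-based assistant core could look like. The names (Intent, register, handle) are invented for illustration and do not refer to any existing project:

    from dataclasses import dataclass, field
    from typing import Callable, Dict, Optional

    @dataclass
    class Intent:
        action: str                            # e.g. "next_bus"
        arguments: Dict[str, str] = field(default_factory=dict)

    Skill = Callable[[Intent], str]            # a skill maps an intent to an answer

    _skills: Dict[str, Skill] = {}

    def register(action: str, skill: Skill) -> None:
        """Any locally installed module can claim an action; no server involved."""
        _skills[action] = skill

    def handle(intent: Intent) -> Optional[str]:
        skill = _skills.get(intent.action)
        return skill(intent) if skill else None

    # A user who cares about the bus system can add the feature themselves:
    register("next_bus",
             lambda i: f"The next bus from {i.arguments['stop']} leaves in 7 minutes.")
    print(handle(Intent("next_bus", {"stop": "4th & Main"})))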

I strongly believe that this goal is worthwhile, not only as a project to drive the quality and availability of digital assistant software, but also as a way to free ourselves from proprietary control in this new fourth model of computing. If we want to have software that tells us which bus to take to get to work on time, then we have the right to have that software without waiting for a company to care enough about the bus system to implement it for us, and we have the right to make that software in order to have it.

This is my proposal to the community: make, or find and contribute to, a project that aims to build a digital assistant that is unencumbered by proprietary interest, that can be extended to meet any user's needs, and that can come closer to a truly intelligent software system than anything that currently exists.

On our obsession with graphic design

We now turn to another phenomenon that has shown up since the introduction of the iPhone. As consumers and developers, we have become increasingly obsessed with graphic design: the color and shape of the border of a button; the coefficient of friction that makes the ideal animation for an element; which items to put in a navigation drawer; which gesture to use to trigger an action... The list goes on.

Our obsession is not a failure of our imagination, or our talents, in any sense. We are highly influenced by visual cues and small rewards. Every once in a while, we discover something else that makes us feel good, and we milk it for all it's worth. For example, despite having no visual indicator, swipe gestures are satisfying to perform because they imitate physically flinging or dragging objects.

In a similar fashion, visual cues are highly relevant to the way that we use our current mobile devices. The restrictive screen real estate forces us to come up with ways to communicate what will happen when a user touches a button, or swipes from the left edge of the screen. Things like highlighting, animation, and haptic feedback can all support a good interface.

What irritates me is that I feel that we have begun to value graphical and gestural tweaks, these small changes in the way we slide our fingers around on our little glass rectangles, over the ability to actually do anything meaningful. We review applications on the basis of their beauty. We market applications with the gesture-based interface as a key selling point. We complain about the color of the switches not matching the color of the system icons. The interface has taken center stage instead of the ease, flexibility, and power of the actions that are performed; function now follows form.

Some of this activity is justified. Deceptive user interface practices, such as misleading a user into thinking that one must give a five-star review, should be exposed without mercy. But much of what we now focus on just feels a lot like an exercise in rearranging the deck chairs on the Titanic. Why do we care that the deck chairs are now a slightly different shade of light blue, or that they can now recline an extra 2.33194 degrees to the left? We are missing the icebergs for the massage machines.

There have been good advances toward addressing this line of thought. Returning briefly to the near-simultaneous introduction of semantic search in OS X and Windows, we find that there is a small but vocal niche for software that cuts through relatively slow and cumbersome graphical interfaces, most visibly through custom launchers like Quicksilver. Extrapolating, we find that even though existing operating system interfaces and their users have grown to meet each other's needs reasonably well, there is demand for software that addresses complex or powerful needs, even if it departs from any computing paradigm in common use. This suggests that we can start our search for a new way forward by rethinking parts of how we work when we use computers, and by building software to match, assist, and augment our thought processes.

Why I think things like the "people-centric UI" have not caught on

One of the most striking attempts to rethink how users interact with computers, and one that continues to receive effort from Microsoft, is the concept of a "people-centric UI". Nevertheless, it has never caught massive interest among the consuming public, which makes it an ideal candidate for evaluating the effectiveness of redesigning interactions that users take for granted as a way of moving forward.

The "modern" UI and design language that was brought to broad consumer usage by Windows Phone 7 presented a significant departure from the application model. Information was presented directly on the Start screen, much like the use of widgets in the default Android launchers, but did not necessarily link deeply into the content to make usage any easier. The Start screen ended up being a hybrid of the iOS and Android UI models, with limited shortcut-style function but a commendable emphasis on providing information without needing to drill downward in application structure.

The key feature of note, however, was the prominence of hubs: views centered around real-world entity types, such as people. This was both a much-needed modernization of such items as rolodexes and business cards and a fundamental inversion of how people think about using a phone. Selecting a person in the hub would show a unified view with aggregated contact information extracted from various communication services: phone numbers, email addresses, and so on. These items would then let the user drop directly into the respective service for contacting that person.
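
As a rough, purely illustrative sketch (the class names are mine, not Microsoft's), the data shape behind such a hub might look like this: one entry per real-world person, aggregating contact methods pulled from several services, so that the person is selected first and the service second.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class ContactMethod:
        service: str       # "phone", "email", "twitter", ...
        address: str       # the service-specific identifier

    @dataclass
    class Person:
        name: str
        methods: List[ContactMethod]

        def contact_via(self, service: str) -> str:
            """Selecting a method drops the user into that service."""
            for m in self.methods:
                if m.service == service:
                    return f"launching {service} for {self.name} at {m.address}"
            raise LookupError(f"{self.name} has no {service} contact method")

    alice = Person("Alice", [ContactMethod("phone", "+1 555 0100"),
                             ContactMethod("email", "alice@example.com")])
    print(alice.contact_via("email"))    # person first, then the verb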

To understand why this was an inversion, one must consider the structure of language and thought in relation to how a user commands interaction with technology. Most European languages, for example, structure their sentences in subject-verb-object format, like this:

  verb          dative
  ----          --------
I sent a letter to Alice.
|      --------
|      object
subject

  verb             dative
  -----            -------
_ Envío un mensaje a Alice.
|       ----------
|       object
|
(conjugated with subject "yo")

A similar structure is observed when giving commands:

  verb          dative
  ----          ------
_ Send a letter to Bob.
|      --------
|      object
|
(implied subject "you")

  verb             dative
  -----            -----
_ Envíe un mensaje a Bob.
|       ----------
|       object
|
(conjugated as command with subject "Usted")

Thus emphasis is placed upon the verb--the action that is requested of the entity that is commanded. This is why I believe the application model intuitively makes sense to many people: we think of applications not in terms of the fact that they curate our activity, but rather as a proxy for an action that we would like to take.

The hub model places the indirect object, the noun in the dative prepositional phrase marked in the examples above, before the verb: a person is selected first, and then the method of contact is requested. This is much like saying "To Bob, send a letter", which is (questionably) grammatically correct but logically jarring in European languages.
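
The two orderings also map naturally onto two shapes of code. The following Python fragment is a hypothetical illustration (both classes are invented for the example): the application model is verb-first, while the hub model is noun-first.

    class Messenger:                          # stands in for an application context
        @staticmethod
        def send(message: str, to: str) -> str:
            return f"sent {message!r} to {to}"

    class Contact:                            # stands in for an entry in a people hub
        def __init__(self, name: str):
            self.name = name
        def send(self, message: str) -> str:
            return f"sent {message!r} to {self.name}"

    print(Messenger.send("a letter", to="Bob"))   # "Send a letter to Bob."
    print(Contact("Bob").send("a letter"))        # "To Bob, send a letter."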

Unfortunately the language model breaks down when considering non-European languages. For example, a contrived (Mandarin) Chinese translation of the example above might look like this:

Tones are indicated by a number suffix after each romanized syllable.

  dative        object
  --------      ----
_ gei3 Bob xie3 xin4.
|          ----
|          verb
|
(implied subject ni3 or nin2)

But it is equally correct to use the following construction, due to the isolating nature of the Chinese language:

  verb      dative
  ----      --------
_ xie3 xin4 gei3 Bob.
|      ----
|      object
|
(implied subject ni3 or nin2)

To the model's credit, however, much of the research and development effort on modern computer interfaces is conducted in English, in part because the field is driven most visibly by Americans, and also in part because almost all programming languages in common use are designed by and for English-speaking developers.

On Glass

On the topic of command-based interfaces, another prominent experiment comes to mind. The basic Glass interface bears a startling resemblance to both the launcher model and the sentence diagrams shown above. A basic interaction might look something like this:

OK glass, send a message to Alice: would you like to get lunch together today question mark.

This command leverages the subject-verb-object structure of the English language to resolve a user's intent into an action.
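
As a toy sketch (not Glass's actual implementation, and with an invented grammar), such an utterance could be resolved into an intent by leaning on the verb-object-recipient word order:

    import re

    COMMAND = re.compile(
        r"ok glass, (?P<verb>send a message) to (?P<recipient>\w+): (?P<body>.+)",
        re.IGNORECASE,
    )

    def resolve(utterance: str) -> dict:
        match = COMMAND.match(utterance)
        if not match:
            raise ValueError("no known verb pattern matched")
        return {"action": match.group("verb").replace(" ", "_"),
                "to": match.group("recipient"),
                "body": match.group("body")}

    print(resolve("OK glass, send a message to Alice: "
                  "would you like to get lunch together today question mark."))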

Another interaction example illustrates yet another pattern:

  • "OK glass, take a picture."
  • Share the picture with "Add a Cat to That"; receive the picture back with a cat added.
  • Share enhanced picture with close friends.

There are two key features of this interaction to note. First, we observe that the picture passes through an operation, and the output of that operation is then given to another operation. This is none other than the composition of actions that was, until now, only available to users of a command line. Given the right set of software supporting this kind of chaining, the ability to compose actions makes Glass a much more exciting target for interaction research.

Second, we see that the photo comes first. In contrast, consider the typical interaction flow for Instagram:

  • Find and start Instagram from the application launcher.
  • Use Instagram's camera view to take a square picture.
  • Use Instagram's filter picker view to pick a way to degrade the quality of said square picture.
  • Use Instagram's sharing view to add a comment and post the picture to Facebook.

Notice that the entire experience has been curated by Instagram, which comes before the content and remains the principal mediator of the action. The equivalent transaction on Glass sees the picture come into being first and then get passed to Instagram at the behest of the user. Therefore Glass inverts the application model of handling content in the interest of keeping the user in the real-world context. Whether it actually succeeds is debatable, but the reversal of content and operation is commendable as an experiment in rethinking how we use software services.
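
As a brief and purely illustrative sketch (the function names are invented), the Glass flow above treats the picture as data flowing through composed operations, with the content coming first and the services applied to it afterward:

    def take_picture():
        return "photo.jpg"

    def add_a_cat_to_that(photo):            # stands in for the third-party service
        return photo.replace(".jpg", "_with_cat.jpg")

    def share_with_close_friends(photo):     # stands in for the final share action
        return f"shared {photo} with close friends"

    # take a picture | add a cat | share -- the pipe, spoken aloud
    print(share_with_close_friends(add_a_cat_to_that(take_picture())))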

What we can do

So what remains to be done?

If you are a developer: First and foremost, stop chasing fads. Profit motive is a poor indicator of success. Anyone can claim to be making the next Facebook, the next Instagram, the next Whatsapp, or the next Big Thing, but we've stuffed our users so full of bad software that there just isn't enough space for these kinds of things to stick anymore. Let's dream of what we would like our computers to do to make our lives easier and build toward that dream. Let's find out what we, and users like us, need in our lives, and solve the problems that stand between us and better lives. You know how you always want everything to just work, and to work together? You have the power to make your services work with each other and with other people's services. Design them to be composed, to do one thing and do it well, and to respect the right of your users (and of you as a user) to use the software that is best for them.

If you are someone with influence over a major technology company: recognize that technology fads are just that. Develop foresight for your products and design them to be ready for the future, not just to make money today. Your company has unparalleled research potential (except for maybe DARPA, but we're talking about consumer technology): use it to find out what the future is like, and bring it to the table. At some point, you will need to stop riding on your very long coattails, get out of the mud, and keep walking. What better time than now?

If you are a venture capitalist: start being pickier about the people you fund. Find the people who have the best shot at building the future that you would like to see, and help them develop their visions into reality. Don't be afraid to invest in what you believe is right. The end profit goal is always nice, but it's the journey that counts.

For everyone else: all I ask is that you understand how your technology got to where it is, and to recognize what you need from your phone, tablet, desktop, laptop, television, car, refrigerator, etc. We developers are terrible at understanding what normal people like you need. Please don't tell us that you need a different gesture to do X, or that Y should look like Z; it doesn't change the fact that what we've built for you is a shambling mess of software that doesn't actually do what you need it to do. Think about why you use what you use, why you need it, and why you are or are not happy with what it lets you do. If you know what you really need, then we can build software that does the right things for you. Help us help you.


Special thanks to:

  • Jeff Atwood, for motivating me to finally publish this.
  • Scott Hanselman, for describing the problems with people-centric UI.
  • Everyone I spoke to about these ideas, for contributing their insights to this mess of words and ASCII art.