@RobinStamer
Created July 29, 2011 06:07
<figaro> Hmm pcboy, you're still here?
<figaro> Anyway, I'm here in regards to the LainOS project
<figaro> I'm interested in revamping the original project, and I may be able to help with publicity if I were to get the source code and any related documents
<Ling> The original project's only code is just a barely modified version of blackbox that doesn't even compile on a modern distribution.
<figaro> is there any way to rebuild the original OS project?
<Ling> I *might* have it somewhere, but IMO it's worthless, you're actually better off just forking from a more recent commit.
<Ling> Yeah, I managed to compile it. There wasn't anything noticeably different with it.
<Ling> Compared to Fluxbox or Openbox.
<figaro> there were 3 downloads on the original OS website
<figaro> all varying in version number
<figaro> they said BETA
<Ling> They say ALPHA
<Ling> http://lainos.sourceforge.net/downloads.php
<figaro> oh, yes sorry
<figaro> I wasn't able to find any modern means of compiling it though
<figaro> the various .txt files were kinda hard to follow
<figaro> I do have a question though
<figaro> were you actually trying to create an alternative OS or just a mod for an already existing OS?
<Ling> My goal was something from scratch
<figaro> majestic was here last night, but he left shortly after the conversation began
<figaro> have you given up on the project?
<Ling> I've given up on calling it LainOS
<figaro> does it have a new name?
<Ling> The part that I'm most interested in at the moment is the file storage
<Ling> https://github.com/RobinStamer/Precipitation
<Ling> There isn't much there so don't get too excited
<figaro> are all of the original team members still working on the project/
<Ling> No
<figaro> and btw pcboy you're ever so quiet
<Ling> As I stated earlier before my connection died, both the original and the revival died.
<figaro> why don't you take the LainOS website down then?
<Ling> The revival project's site is down
<pcboy> figaro: Yeah I'm there. But no I don't think I have the time for that project. I have a company and a lot of work to do.
<Ling> I have no control over the original
<figaro> I sent e-mails to the ones given on the original page several days ago; I didn't get a rejection ping, but no reply yet either
<figaro> [email protected], [email protected], [email protected]
<figaro> aliencoffeetable seems to be down though
<pcboy> neovangelist one day quit the channel and never came back.
<pcboy> You will never have a reply.
<pcboy> s/have/get/
<pcboy> And when I say "one day" it was maybe 4 years ago.
<figaro> do you plan to make this public?
<pcboy> What ?
<pcboy> There is nothing.
<figaro> countless ppl are wondering if the Original project still even exists
<pcboy> neovangelist, imho, has never done anything.
<figaro> the last information regarding the OS was from 2004
<pcboy> He was always saying "I'm in need of good developers because everyone sucks", but he did nothing.
<figaro> majestic? is he still on the project?
<pcboy> I've never seen him.
<Ling> Clearly not.
<figaro> wasn't he here last night?
<pcboy> On the irc only neovangelist was there.
<figaro> oh
<figaro> sorry, I'm getting names mixed up
<figaro> you see, I'm a member of a certain forum consisting of numerous people, ranging from hackers of both the white hat and black hat variety; the forum covers computing, programming, gaming, anime, and countless more. I started a thread about Serial Experiments Lain and the LainOS project in hopes of learning more, someone told me about this IRC, and that's how I found it
<figaro> many people are there for the love of computing and creating things, others are there for the fun
<Ling> I don't think it ever got much traction
<figaro> if you need people for programming then I could refer you to the site
<Ling> If you can rally 1-2 more people I'll bring the revival site back online and shove the code we did on github.com
<Ling> Otherwise I'm not really interested at the moment.
@RobinStamer
Author

A long-standing issue with modern computer interfaces is that the interface is bound to the data manipulation and storage systems. This means that to make a new GUI for a specific system (say, a mail client) one has to rewrite all the related handlers, like fetching mail, sending mail, filtering, saving drafts, et cetera.
A much saner method would be to compose each program of components.
The shell provides a user interface and interacts with the API only.
The core is either a daemon/service or has all its logic held in the API; it doesn't matter which.
The data would then be stored in an open format that is read only by the core.
The shell (the interface) should be as simple as possible and use an API to do all the heavy work; this means the interface can be scrapped and rebuilt with a new design quickly, or the backend can be replaced without having to redo the interface.
Certain programming designs actually suggest something similar (MVC?); this is just an extension of that, allowing even third parties to replace the components.
The three components, data, core and shell, are interchangeable (a shell can be replaced by another shell that connects to the same core), but not required to be.
The only component that must be interchangeable with a peer is the shell. Each component gains additional features from this model.
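The split described above can be sketched in a few lines. This is a minimal illustration, not any real LainOS code; all names (MailCore, TextShell, the method signatures) are hypothetical, chosen to show that the shell performs no logic beyond API calls:

```python
class MailCore:
    """Core: owns the data and all the logic; knows nothing about display."""
    def __init__(self):
        self._drafts = []              # the "data" component, reachable only via the core

    def save_draft(self, text):
        self._drafts.append(text)

    def list_drafts(self):
        return list(self._drafts)      # hand out copies, never internal state


class TextShell:
    """Shell: pure presentation; every action is an API call on the core."""
    def __init__(self, core):
        self.core = core

    def render(self):
        return "\n".join(f"draft: {d}" for d in self.core.list_drafts())


core = MailCore()
core.save_draft("hello")
shell = TextShell(core)                # any other shell could attach to the same core
print(shell.render())                  # prints "draft: hello"
```

Because TextShell touches nothing but the core's API, swapping it for a GUI shell (or swapping MailCore for another core with the same API) requires no changes to the other component.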

The shell is meant to be a dumb program that does no computation outside the API.
Being mostly featureless, it becomes the most modular part of the architecture.
Because of this design, a special type of shell, dubbed a bot, may interact with the core to do automated tasks.
For example, a communication bot named alert is available to receive messages from programs that do not use the model; it accepts the message (eg: "the Madison site is no longer accessible") and the destination (eg: Jon Doe) and, through the core, contacts the destination over every protocol the core has contact information for (eg: [email protected] and irc://example.net/JDoe).
Anything that needs information from a core is able to get it. For example, the communication core provides a list of contacts and their handle in each namespace (MSN, AIM, EMail, et cetera), and maintains the user's connections for those various namespaces, the information required to maintain them, and the incoming information from those connections, which becomes data that can be polled from the core.
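The alert bot above can be sketched as follows. This assumes (hypothetically) that the communication core exposes contacts as a mapping of names to per-protocol addresses; the class and method names are illustrative, and the addresses are made-up examples:

```python
class CommCore:
    """Communication core: holds contacts and performs the actual sends."""
    def __init__(self, contacts):
        # contacts: {name: {protocol: address}}, e.g. {"Jon Doe": {"smtp": ...}}
        self._contacts = contacts

    def addresses_for(self, name):
        return self._contacts.get(name, {})

    def send(self, protocol, address, message):
        # A real core would speak SMTP, IRC, etc.; here we just record the send.
        return f"[{protocol}] to {address}: {message}"


def alert(core, destination, message):
    """Bot shell: no UI at all; fans the message out over every protocol
    the core has contact information for."""
    return [core.send(proto, addr, message)
            for proto, addr in core.addresses_for(destination).items()]


core = CommCore({"Jon Doe": {"smtp": "jdoe@example.com",
                             "irc": "irc://example.net/JDoe"}})
for line in alert(core, "Jon Doe", "the Madison site is no longer accessible"):
    print(line)
```

The bot never decides *how* to reach anyone; adding a new protocol to the core instantly extends every bot that uses it.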

The core is the heart of the system; it must be able to store the information it collects and pass that to the shell for display and manipulation by the user or a bot.
Since the core allows access to all the data it stores, the core must allow another core to siphon its data, to enable seamless migrations of the user's data.
Every core must ship a client access API; certain types of cores may be required to provide established feature sets, as well as certain API calls.
A multiple-core environment (where cores all provide the same category of features, but not the same feature lists) is still beneficial, as a special core may be built to bring all the other cores under a common interface, much as a proxy does today.
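Core-to-core siphoning and the aggregating "proxy" core can be sketched together. This assumes, hypothetically, that every core exposes its full dataset through a dump()/load() pair; nothing here is a real API:

```python
class Core:
    """A core that can export and import its entire dataset."""
    def __init__(self, data=None):
        self._data = dict(data or {})

    def dump(self):
        return dict(self._data)        # full export, for migration and backup

    def load(self, data):
        self._data.update(data)


def migrate(src, dst):
    """Siphon everything from one core into another."""
    dst.load(src.dump())


class ProxyCore:
    """Presents several cores behind one common interface, merging their data."""
    def __init__(self, cores):
        self._cores = cores

    def dump(self):
        merged = {}
        for core in self._cores:
            merged.update(core.dump())
        return merged


old_core = Core({"contacts": ["JDoe"]})
new_core = Core()
migrate(old_core, new_core)            # new_core now holds the same data
```

Because the proxy implements the same dump() interface as an ordinary core, shells (and migrate itself) can treat a cluster of cores as a single one.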

The data a core stores and protects is made available to the user for migration and backup purposes. It is generated from the user's input (settings), external protocols (eg: MSN, AIM, SMTP, HTTP, FTP, all of which become data sources) and core-to-core communication.
The actual storage mechanism would depend on the core's features and the data's nature, so either database storage or filesystem storage will be used by the various cores.

In a single-user, multiple-machine environment (commonly one user has both a desktop and a laptop), this model benefits the user because they can work with both machines easily.
With network-reachable cores, they can have a shell open on either computer to get a primitive dual-screen environment, letting them see data from multiple shells at once even if the cores are hosted on the same machine.
If each machine has its own core, they can use the migration feature to have the same data on both machines, so no matter where ey are, ey can work on what they need to without having to fuss about manually copying said data.
The user could also move to any workstation, force the cores to use data from a portable hard drive they own, or migrate from a public core they sent eir data to and use that workstation as eir own, with all of eir configurations in place.

In a multiple-user environment the core may be able to set up shared data pools that multiple people have access to.
When multiple machines are added to that mix, network backups are a simple matter of activating the migration feature, which can run in the background.
This lets the user work on a local dataset without having to constantly ask the central server for a potentially large download, while still having the peace of mind that comes with centralized backup systems.
For certain cores it may be beneficial to host them on dedicated core servers, which workstations can access and work with; when the load gets too high, the migration feature enables live replication, making load balancing as simple as setting up round-robin DNS entries.
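Once live replication keeps every core server identical, the load-balancing step really does reduce to round-robin selection, which is all round-robin DNS does. A toy sketch (the server names are hypothetical):

```python
import itertools

class RoundRobin:
    """Cycle through identical replicated core servers, as round-robin
    DNS would, handing each new client the next server in turn."""
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def next_server(self):
        return next(self._cycle)


rr = RoundRobin(["core1.example.net", "core2.example.net"])
print([rr.next_server() for _ in range(4)])
# alternates: core1, core2, core1, core2
```

No balancer logic is needed beyond this, precisely because migration has already made the servers interchangeable.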
