[philiptellis] /bb|[^b]{2}/
Never stop Grokking



Monday, November 22, 2010

Stream of Collaboration and the Unified InBox

Back in 2003, I'd published a report on the state of computer mediated collaboration at the time. The report did not contain any original research, but was a list of references to other research on the topic. I was also working on ayttm and doing some work on automated project management at the time, which led to my talks on Fallback Messaging and Project Management with Bugzilla, CVS and mailing lists.

The state of technology has changed a lot over the years, and we're getting closer to the Unified Message InBox. I've been mulling the idea over for a while, and I've prototyped various implementations as a subset of what I call a stream of collaboration.

Communication

As technologically aware humans, we communicate in a variety of ways: face-to-face, through the grapevine, handwritten letters and post-it notes, instant messaging, SMS, telephone calls, email, discussion boards, twitter, blogs, smoke signals, morse code using signalling lights, semaphore flags and more. Some of us prefer one form over another; some will completely boycott a particular form purely on principle; some won't use a form of communication because simpler methods exist; and some will use a particular form purely because it isn't the simplest one available. That's what makes each of us uniquely human. It also pushes us into groups and cliques that may or may not intersect. Who among you has a separate group of friends on twitter and facebook even though the two media are not vastly different? Do you also belong to a HAM radio club and a book reading group?

Now some forms of communication are well suited to automated archiving while others might end up being a game of Chinese whispers. Each form of communication also results in a different signal to noise ratio. Depending on the context of the conversation, accuracy and efficiency may or may not be a concern.

Degrees of Communication

Collaboration and Cooperation
Collaboration between a group of people involves communication, often requires that all communications be archived, and depends on a high signal to noise ratio. Discussion lists, IRC logs, wikis, whiteboards, video/audio conferences, project trackers, bug trackers, source control commits, and sometimes email all provide good archiving capabilities. With proper time-stamping of each interaction, the archiving service can interleave events, showing the exact chronological order of decisions being made. A Bayesian filter, similar to the ones used for classifying spam, can be used on a per-topic basis to hide (but not remove) off-topic sections in an attempt to increase the signal to noise ratio. Once archived, even synchronous communication turns asynchronous [1]. Readers can later comment on sections of a communications log.
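The interleaving idea above is simple to sketch: if each archive (IRC log, commit log, bug tracker) is already sorted by time, merging them is a single pass. The sources and messages below are invented for illustration.

```python
import heapq
from datetime import datetime

# Events from different archives, each as (timestamp, source, text),
# already in chronological order within its own archive.
irc_log = [
    (datetime(2010, 11, 22, 10, 0), "irc", "let's drop the old API"),
    (datetime(2010, 11, 22, 10, 7), "irc", "agreed, deprecate in v2"),
]
commits = [
    (datetime(2010, 11, 22, 10, 5), "cvs", "mark old API deprecated"),
]
bugs = [
    (datetime(2010, 11, 22, 10, 9), "bugzilla", "bug 42: remove old API"),
]

def interleave(*archives):
    """Merge already-sorted archives into one chronological stream."""
    return list(heapq.merge(*archives))

timeline = interleave(irc_log, commits, bugs)
for ts, source, text in timeline:
    print(ts.isoformat(), source, text)
```

A real archiving service would of course stream this rather than build a list, but the merge-by-timestamp step is the same.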

While some of the tools mentioned above are aimed at technical collaboration, many of them may also be used effectively for collaboration and support in a non-technical context [2,3] where immediate dissemination of information that can be archived and later used for reference is important.
Social Interaction
A form of communication that is more tolerant of a low signal to noise ratio, and in many cases does not require archival is social interaction. For example, at a party to watch a ball game, conversation may range from the actual events of the game to something completely irrelevant, and they're all socially acceptable in that context. Similarly, the corridor conversations at a conference may only be partially relevant to the subject matter of the conference. How does this translate to a scenario where people are geographically separated and need to communicate electronically rather than face to face?

Social television experiences [4], Co-browsing, LAN parties and digital backchannels [5] are examples where individuals communicate online while simultaneously engaging in a common task at geographically disparate locations. Computer mediated collaboration allows these individuals the ability to come close to the full party experience.
Casual Conversations
With more people moving their lives online [6,10], we're at a point where the volume of casual electronic conversations far exceeds that of technical collaboration. Fallback messaging is a good starting point to tie different forms of communication into a single thread, but it ignores current reality.

For starters, the fallback messaging idea assumes that users would use the same interface to communicate synchronously and asynchronously. In reality people use different methods for each. Asynchronous communication offers the ability, and often creates the necessity of longer messages and a larger amount of time devoted to communicating the message [7]. Should I say "Dear ..." or do I lead in with "Hi"? Should my background be pink or yellow? Do I want hearts, unicorns, birthday cakes or plain white in the background? Do I sign off with "Sincerely", "Regards", "Cheers" or "ttyl"? A different interface for each mode of communication makes it possible to customise the interface for the most common uses of that mode.

Fallback messaging was initially applied only to conversations between two people; however, a lot of casual communication happens between groups, with the conversation sometimes forking into side-band conversations between a few (possibly two) members of the group and then merging back into the parent conversation [7]. Sometimes the child conversation is made public to the group and sometimes it isn't. A messaging system must take this into consideration.

Enabling the Stream of Collaboration

Borrowed Ideas
The main idea behind fallback messaging was that service providers, communication protocols and user accounts are an implementation detail, and the user should not be concerned with them. Users should only need to think of who they're communicating with, identifying them in any way they find comfortable (for example, by name, nickname, an avatar or a favourite colour). The service needs to determine, based on context and availability, which messaging protocol to use.
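One minimal way to sketch this: a contact object that hides accounts behind a nickname and picks a protocol from context. All class, protocol and address names here are hypothetical, not any real service's API.

```python
class Contact:
    """A person the user knows by nickname; accounts are hidden details."""

    def __init__(self, nickname, accounts):
        self.nickname = nickname      # how the user identifies this person
        self.accounts = accounts      # protocol -> (address, online?)

    def pick_protocol(self, urgent=False):
        """Prefer a synchronous protocol when the contact is online,
        falling back to email, which works asynchronously."""
        preference = ["im", "sms", "email"] if urgent else ["im", "email"]
        for proto in preference:
            entry = self.accounts.get(proto)
            if entry is None:
                continue
            address, online = entry
            if online or proto == "email":
                return proto, address
        raise LookupError("no usable account for %s" % self.nickname)

alice = Contact("alice", {
    "im":    ("alice@chat.example.org", False),   # currently offline
    "email": ("alice@example.org", True),
})
print(alice.pick_protocol())
```

The user only ever says "message alice"; the routing decision stays inside the service.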

Another idea comes out of the original SMTP specification: the now obsolete SOML [8] command. SOML (Send Or MaiL) would check whether the recipient of the message was online when the message was sent, and if so, would echo the message to the user's terminal. If the user wasn't online at the time, it would append the message to their mailbox instead. Services like Yahoo! Messenger offer offline messaging capabilities that hold a message on the server until the user comes online, at which point it is delivered.
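The SOML semantics fit in a few lines. This is a toy model, not an SMTP implementation; the data structures standing in for terminals and mailboxes are invented for illustration.

```python
# Toy model of RFC 821's SOML: echo to the recipient's terminal if they
# are logged in, otherwise append the message to their mailbox.
online_users = {"mary"}
terminals = {}   # user -> lines echoed to their terminal
mailboxes = {}   # user -> messages stored for later

def soml(recipient, message):
    if recipient in online_users:
        terminals.setdefault(recipient, []).append(message)
        return "echoed"
    mailboxes.setdefault(recipient, []).append(message)
    return "mailed"

print(soml("mary", "lunch?"))   # mary is online
print(soml("john", "lunch?"))   # john is offline
```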

The problem with both these approaches is that they expect the user to create a message appropriate for the underlying service rather than the other way around: an entire email message in the case of SOML, or a short text message in the case of Yahoo! Messenger. What we really need is a service that can decide, based on what the user does, what kind of message needs to be sent and how that message should be presented.

Facebook, GMail and Yahoo! Mail all offer a service where instant messaging and mail style messages can be sent from the same page, but with a different interface for each. Additionally, GMail provides the ability to see archived email messages and chat messages in the same context.
Proposed Interface
A messaging system is most useful to the user if it acts as a single point for them to reference and act on all past conversations, and provides an easy gateway to initiating new ones. The read interface must list all past conversations chronologically, with the ability to filter them based on topic and other participants in the conversation. It should be able to show where a conversation forked into separate threads, some of which may have been private, but all of which involved the user, and where these threads merged back into the primary stream.
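The fork-and-merge structure described above can be modelled as threads that know their parent and participants. A minimal sketch, with invented names, assuming a side thread stays hidden until its participants choose to share it:

```python
class Thread:
    """A conversation thread; side-band forks point back at their parent."""

    def __init__(self, participants, parent=None, private=False):
        self.participants = set(participants)
        self.parent = parent
        self.private = private
        self.messages = []   # list of (sender, text)

    def fork(self, participants, private=True):
        """Start a side-band conversation among a subset of participants."""
        return Thread(participants, parent=self, private=private)

    def merge_back(self):
        """Fold this thread's messages into its parent, if it was shared."""
        if self.parent is not None and not self.private:
            self.parent.messages.extend(self.messages)
        return self.parent

main = Thread({"ann", "bob", "cai"})
main.messages.append(("ann", "where shall we meet?"))
side = main.fork({"ann", "bob"})              # private aside
side.messages.append(("bob", "not cai's place again"))
side.private = False                          # they decide to share it
side.merge_back()
print(len(main.messages))                     # the aside joins the stream
```

A read interface could then render the fork point, the private stretch (for those who were in it), and the merge, all on one timeline.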

The interface should include all kinds of communication including email, instant messages, co-browsing sessions, whiteboards, IRC, and anything else. Integrating with a service such as Google Voice, Skype or other VoIP solutions also allows it to tie in telephone conversations. Twitter and facebook notifications would tie in to this timeline as well.

The system should not rely only on message headers and meta-information, but also on message content to determine the topic and participants in a conversation [9]. Some participants, for example, may be involved indirectly, but not be privy to the details of the conversation, however it is useful to the reader to be able to filter messages with these details. Content analysis can also be used to identify messages as common Internet memes and tag them accordingly, possibly providing external links to more detailed information on the topic. Lastly, as has been proposed with fallback messaging, the system needs to aggregate all accounts of a contact across services into a single user defined identifier. It should still be possible for the user to identify which service is being used, but this should be done using colours, icons or similar markers.
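The last point, aggregating accounts under a single user-defined identifier while still tagging each with its service, is the easiest to sketch. Every name, service and address below is made up for illustration.

```python
# One user-chosen identifier per contact, mapped to (service, account)
# pairs; the service tag is what the UI would turn into a colour or icon.
contacts = {}

def link_account(name, service, account):
    contacts.setdefault(name, []).append((service, account))

def accounts_for(name):
    return contacts.get(name, [])

link_account("mum", "email", "mum@example.org")
link_account("mum", "yahoo-im", "mum_on_ym")
link_account("mum", "twitter", "@mumtweets")

for service, account in accounts_for("mum"):
    print(service, account)
```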

Where are we today?

All major electronic communication services already provide some level of API access to their messaging systems [11,12,13,14,15,16,17]. Services like Tweetdeck provide a single interface to multiple short message services (like twitter, identi.ca and the facebook wall), and Threadsy is supposed to unify your online social experience. Facebook seeks to unify email, IM, texting and Facebook messages through their own messaging service [18] which, like fallback messaging, is supposed to abstract out user accounts so that all you see is your contacts. I haven't seen the new messaging service yet, so I don't know if it also integrates with things like GMail, twitter, Yahoo! and other services that compete with Facebook [19]. If it does, that would be pretty cool. If it doesn't, there's an opportunity to build it. There are still other services that need to be tied in to enable full collaboration, but it doesn't seem too far away.

References

  1. Terry Jones. 2010. Dancing out of time: Thoughts on asynchronous communication. In O'Reilly Radar, October 26 2010. http://radar.oreilly.com/2010/10/dancing-out-of-time-thoughts-o.html
  2. Leysia Palen and Sarah Vieweg. 2008. The emergence of online widescale interaction in unexpected events: assistance, alliance & retreat. In Proceedings of the 2008 ACM conference on Computer supported cooperative work (CSCW '08). ACM, New York, NY, USA, 117-126.
  3. Sutton, J., Palen, L., & Shklovski, I. 2008. Back-Channels on the Front Lines: Emerging Use of Social Media in the 2007 Southern California Wildfires. In Proceedings of the Conference on Information Systems for Crisis Response and Management (ISCRAM).
  4. Crysta Metcalf, Gunnar Harboe, Joe Tullio, Noel Massey, Guy Romano, Elaine M. Huang, and Frank Bentley. 2008. Examining presence and lightweight messaging in a social television experience. ACM Trans. Multimedia Comput. Commun. Appl. 4, 4, Article 27 (November 2008), 16 pages.
  5. Joseph F. McCarthy, danah boyd, Elizabeth F. Churchill, William G. Griswold, Elizabeth Lawley, and Melora Zaner. 2004. Digital backchannels in shared physical spaces: attention, intention and contention. In Proceedings of the 2004 ACM conference on Computer supported cooperative work (CSCW '04). ACM, New York, NY, USA, 550-553.
  6. Donna L. Hoffman, Thomas P. Novak, and Alladi Venkatesh. 2004. Has the Internet become indispensable?. Commun. ACM 47, 7 (July 2004), 37-42. http://cacm.acm.org/magazines/2004/7/6471-has-the-internet-become-indispensable
  7. Rebecca E. Grinter and Leysia Palen. 2002. Instant messaging in teen life. In Proceedings of the 2002 ACM conference on Computer supported cooperative work (CSCW '02). ACM, New York, NY, USA, 21-30.
  8. Jonathan Postel. 1982. RFC 821: Simple Mail Transfer Protocol. IETF Network Working Group, August 1982. http://www.ietf.org/rfc/rfc0821.txt
  9. Dong Zhang; Gatica-Perez, D.; Roy, D.; Bengio, S.; "Modeling Interactions from Email Communication," Multimedia and Expo, 2006 IEEE International Conference on , vol., no., pp.2037-2040, 9-12 July 2006
  10. Rana Tassabehji and Maria Vakola. 2005. Business email: the killer impact. Commun. ACM 48, 11 (November 2005), 64-70. http://cacm.acm.org/magazines/2005/11/6081-business-email
  11. Yahoo! Mail API. http://developer.yahoo.com/mail/
  12. Yahoo! Messenger SDK. http://developer.yahoo.com/messenger/
  13. Twitter API. http://dev.twitter.com/doc
  14. Facebook Developer API. http://developers.facebook.com/docs/
  15. GMail API: http://code.google.com/apis/gmail/
  16. Google Voice APIs: http://thatsmith.com/2009/03/google-voice-add-on-for-firefox, http://code.google.com/p/pygooglevoice/
  17. Jabber protocol (Google Chat & Facebook Chat): http://xmpp.org/xmpp-protocols/
  18. MG Siegler. 2010. Facebook's Modern Messaging System: Seamless, History and a Social Inbox. Techcrunch, Nov 15 2010. http://techcrunch.com/2010/11/15/facebook-messaging/
  19. Alexia Tsotsis. 2010. Between Gmail, Twitter and now Facebook There is no Universal Inbox Yet. Techcrunch, Nov 22 2010. http://techcrunch.com/2010/11/21/facebook-messages-is-people/

Sunday, January 29, 2006

Progressive Enhancement via μMVC - I

The web today is like a huge buzzword bingo game. There's so much flying around that it's hard to stay in touch unless you're in it day in and day out. That's not something that old school engineers like me find easy. I'm far more comfortable staring at my editor, hacking code to interact with a database or some hardware. Interacting with users is tough. Doing it with sound engineering principles is even tougher.

I'm going to take a deep breath now and mention all the web buzzwords that I can think of and somehow fit them into this article.

AJAX, RIA, JSON, XML, XSLT, Progressive Enhancement, Unobtrusiveness, Graceful Degradation, LSM, Accessibility.

Definitions

Let's get a few definitions in there:
AJAX
A generic term for the practice of asynchronously exchanging data between the browser and server without affecting browsing history. AJAX often results in inline editing of page components on the client side.
RIA
Rich Internet Applications - web apps built to feel like desktop applications. Most often built using AJAX methods and other funky user interactions.
JSON
A popular data interchange format, easily parsed in many languages. Extremely useful for compactly sending data from a server side script to a Javascript function on the client.
XML
A common, but verbose and slow to parse data interchange/encapsulation format, used to exchange data between client and server.
XSLT
XSL Transformations - transform XML to something else (most likely HTML) using rules. Can be executed on either client or server depending on capabilities.
Progressive Enhancement
The practice of first building core functionality and then progressively adding enhancements to improve usability, performance and functionality.
Unobtrusiveness
The practice of adding a progressive enhancement without touching existing code.
Graceful Degradation
The ability of an application to gracefully retain usability when used on devices that do not support all required features, if necessary by degrading look and feel. Graceful Degradation follows from Progressive Enhancement.
LSM
Layered Semantic Markup - the practice of building an application in layers. At the lowest layer is data encapsulated in semantic markup, i.e., data marked up with meaning. Higher layers add style and usability enhancements. LSM enables Progressive Enhancement and Graceful Degradation.
Accessibility
The ability of an application to be accessed by all users and devices regardless of abilities or capabilities.
See Also: Progressive Enhancement at Wikipedia, Progressive Enhancement from the guy who coined the term, Progressive Enhancement from Jeremy Keith, Ajax, Graceful Degradation, Layered Semantic Markup, JSON

We'll get down to what this article is about, but first let me add my take on LSM.

LSM's layers

While LSM suggests development in layers, it doesn't specify what those layers should be. Traditionally, developers have looked at three layers: Semantic Markup, Semantic CSS and Javascript. I'd like to take this one level further.

The way I see it, we have 4 (or 5) layers.

Layers 1 and 2 are semantic markup (HTML) and semantic classes (CSS). Layer 3 in my opinion should be restricted to unobtrusive javascript added for UI enhancements. This would include drag and drop, hidden controls, and client side form validation, but no server communication.

Layer 4 adds the AJAX capability, however, just like Layer 3 does not absolve the back end from validating data, layer 4 does not absolve the back end from producing structured data.

Right down at the bottom is synchronous, stateless HTTP (Layer 0).

And now, back to our show.

Web application frameworks and MVC

There's been a lot of work in recent times to build web application development frameworks that make it easy for a developer to add AJAX methods to his app. Tools like Ruby on Rails, Django, Dojo and others do this for the user, and build on time tested design patterns.

For a long while web application frameworks have implemented the MVC pattern. Current frameworks merely extend it to move some parts of the view and controller to the client side instead of doing it all server side.

See also: MVCs in PHP, Intro to MVCs in PHP5, The controller, The view.

The problem with this is that your code is now fragmented between client and server, and implemented in different languages, possibly maintained by different programmers. Questions arise as to whether the bulk of your code should go into the server or the client, and of course, which model degrades best to account for accessibility?

Brad Neuberg has an excellent article on the pros and cons of each approach, and when you should choose which.

He still leaves my second question unanswered, but Jeremy Keith answers it with Hijax, his buzzword for hijacking a traditionally designed page with AJAX methods... in other words, progressive enhancement.

I've had thoughts that ran parallel to Jeremy's and it was quite odd that we ended up speaking about almost the same ideas at the same place and time. Well, he published and I didn't, so my loss.

Jeremy's ideas are spot on, but he doesn't mention implementation specifics, or whether the same code base can be used for more than just adding Ajax to an existing application.

More about MVC

The MVC pattern is great in that it doesn't state what your view should be, but merely that it should not be tightly coupled with your application model. Most implementers look at it as a way of designing an entire application around a single controller. Every action and subaction correspond to a controller branch, which in turn decides how data should be manipulated, and which view to call.

While this is good (if implemented correctly) at the high level, it is complex, and prone to bad design. It's not surprising that the big boys get wary when MVCs for web apps and PHP in particular are mentioned.

μMVC

If, instead, we look at views as just views of data, and allow different views of the same data, then we end up with a different structure. Instead of selecting a view based on the action to be performed, we select a view based on the output format that the user wants. This may be HTML in the default case, where the top level controller merely stitches various HTML subviews together to form the entire page, each subview being sent to the browser as soon as it's ready, to improve performance.

If the user has other capabilities though, we send data in a different format, and chances are we don't need to send across all subviews. A single subview that's very specific to the data requested is sufficient. We do less work on the server, run fewer database queries, send less data across the network, and improve performance overall on client and server. The data format selected depends on the client application, and may be an HTML snippet that goes into innerHTML, a JSON data structure that gets parsed on the client side, javascript code that gets evaled on the client side, or XML data returned as a web service or for client side XSL transforms.

We use exactly the same data processing code for all requests, and only switch on the final function that transforms the internal data structures to the required output format.
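The switch can be as small as a dictionary of transforms. Part II promises PHP examples; as a language-neutral sketch (the helper names and data are invented), the shape is:

```python
import json

def fetch_comments():
    """Stand-in for the model: identical for every output format."""
    return [{"author": "jdoe", "text": "nice post"}]

def to_html(rows):
    return "".join("<li>%s: %s</li>" % (r["author"], r["text"]) for r in rows)

def to_json(rows):
    return json.dumps(rows)

views = {"html": to_html, "json": to_json}

def controller(fmt="html"):
    data = fetch_comments()   # same processing for all requests
    return views[fmt](data)   # only the final transform differs

print(controller("json"))
```

Adding an XML view, or an HTML-snippet view for innerHTML updates, means adding one entry to the table, not another controller branch.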

I call this a micro MVC (μMVC) because the model, view and controller all act on a very fine granularity without considering overall application behaviour. Note also that the view and controller are now split across the server and client.

The client side controller kicks in first telling the server side controller what it's interested in. The server side controller performs data manipulation, and invokes the server side view. The client side controller passes the server side view to the client side view for final display.

This development model fits in well with the LSM framework which in turn leads to Progressive Enhancement and Graceful Degradation, and most of all, it opens up new avenues of accessibility without excessive degradation.

In part II of this article, I'll go into implementation details with examples in PHP and some amount of pseudocode.

Sunday, November 13, 2005

Why Foss in Education makes sense.

I'm supposed to speak at Foss.in on why FOSS makes sense in education. I chose the topic because it's something I'd worked on while I was still at NCST. The effective use of computers in children's education was a subject very close to me.

This could well be the least technical post on this blog, but I've been having trouble getting coherency in my thoughts and I have to put it down for clarity. As has been the case in the past, this blog becomes a sounding board for me to dribble my thoughts. I'm not looking for comments, just trying to clear stuff in my head.

At Vidyakash 2002, it had been suggested that I get a hold of Seymour Papert's The children's machine. Papert worked under Piaget to study how children learn, and the results of these studies were the Logo programming language, and two successful books - Mindstorms, and The Children's Machine - the former being the inspiration behind Lego Mindstorms. I managed to get my hands on this book, and reviewed it for the Vidyakash Newsletter.

It had also started me thinking on how computers were being used in education. What follows are my thoughts.

The role of computers in education

I think before we proceed to decide on the tools to use, we need to know why we need computers in a school. What would we do with them, where would they be used, and who would use them?

I see three basic uses for a computer in a school.
  • Instruction Delivery
  • Instruction Enabling
  • Administration
The first two are those that I'm primarily concerned with, and the remainder of this post will be about those.

Both instruction delivery, and instruction enabling are interactions between a student and a teacher, where the latter may or may not exist. A teacher has traditionally been one who delivers instructional content to a student, and enables learning to take place. This generally means that the entire class will follow at the teacher's pace, and according to the teacher's will.

How does a computer fit in here?

Computers are excellent at instruction delivery, primarily replacing the text book. However, rather than just throwing static text and pictures onto a computer screen, instructional content may be made far richer through the use of animations, sound, video and simulations. An instruction designer is required to build this content effectively.

Instruction enabling - in my terminology - has more to do with the computer being the target of learning. For example, one cannot teach C programming without a computer (although people have tried). The computer in this case is the laboratory within which students learn to apply the knowledge they've gained in the classroom.

If we concentrate only on computer education for a short part of this post, we could see that it is possible to merge the classroom and the laboratory into a single entity. Instruction delivery and experimentation can take place within a very real environment that is the computer, and in fact, the history of computer education is filled with examples of CBTs and web based courses.

Note: I haven't mentioned FOSS yet.

Now, let's drop the restrictions on our thoughts above and apply this to all forms of education.

Computers have been used in the past to teach Math, English, the sciences and various other subjects, but what has been the model followed?
Do we want the computer to program the child or the child to program the computer?
Too often, we've seen that CBTs flood the child with information that he has to memorise, and then throw tests at him to test his knowledge. He goes further once he's cleared all tests. This looks a lot like the way I program a computer. I throw a whole bunch of data at it, and then I write and constantly refine my code until it processes the data correctly, to give me my expected output.

Do we really want to create a generation of automatons? (automata?)

Instead, Papert shows a different model, and he takes the simple example of learning a language.

A child in Surat learns Gujarati as easily as a child in Toulouse learns French. In fact, several children in Surat learn both Gujarati and Hindi with that same ease. My grandma learnt Tamil, Telugu, Malayalam, Hindi and Bengali. At the same time, it's terribly hard for an adult to do the same. Most adults can never pick up a foreign language.

I've been in foreign language classes for adults for three languages, and in all cases there have been people who pick it up really quickly, and there are those that never do. Invariably, it's the folks who would otherwise be considered childish who pick up the language quicker. In any case, adult education is not the point here.

Learning is genetic

Papert suggests that the child in Toulouse and the child in Surat inherit learning from their respective environments. Learning through living, as it were (I had a document to link to about this, but it no longer exists online). As we proceed through life, we pick up experiences through our various sensors - eyes, ears, nose, mouth, touch - and translate them into learning elements stored in our brains. Language learning is no different.

Can we somehow teach Math and Physics in the same way? Can we create a natural environment in which the rules of speech are not grammar and spellings, but mathematical identities or Newton's laws of motion?

Umm, yeah, Logo does that. It creates a Mathland and a Physicsland where children can learn math and physics by playing. The results are amazing, and terribly scary for teachers. Teachers need to accept that for once, a child may learn in an unplanned way. A child may come to an innovative solution that the teacher hadn't envisioned, in much the same way that Gauss summed the integers from one to a hundred when he was five.
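The flavour of teaching the turtle is easy to show. This is an illustrative, stripped-down turtle (not the real Logo or any graphics library): it only records the line segments it is told to draw, which is enough to see the Mathland idea of composing lines into shapes.

```python
import math

class Turtle:
    """A minimal turtle: tracks position/heading, records drawn segments."""

    def __init__(self):
        self.x, self.y, self.heading = 0.0, 0.0, 0.0
        self.path = []   # list of ((x1, y1), (x2, y2)) segments

    def forward(self, d):
        nx = self.x + d * math.cos(math.radians(self.heading))
        ny = self.y + d * math.sin(math.radians(self.heading))
        self.path.append(((self.x, self.y), (nx, ny)))
        self.x, self.y = nx, ny

    def right(self, angle):
        self.heading -= angle

def square(t, side):
    """Logo's TO SQUARE :SIDE -> REPEAT 4 [FORWARD :SIDE RIGHT 90]"""
    for _ in range(4):
        t.forward(side)
        t.right(90)

t = Turtle()
square(t, 100)
print(len(t.path))   # four sides, and the turtle is back where it started
```

The child discovers, by teaching the turtle, that four equal sides and four right angles bring you home: geometry learnt as a rule of the land, not a memorised fact.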

Teachers need to be prepared to sit down and figure out a problem and its solution along with the student, and not by themselves, only to proclaim the solution hours later. It's the process of figuring it out that creates learning, not the process of listening to a clean room solution.

Debugging one's mistakes

Some experiences excite us and accelerate learning, while others scare us and slow it down, sometimes stopping it permanently. All too often, our teaching systems are designed to make children afraid of learning. We punish them when they make mistakes rather than showing them how to debug their errors and move towards a solution.

Enter FOSS.

FOSS is great for learning because the source code is available. Not just for reading, but for modification, and experimentation.

Those last two points are what make pure foss projects different from source-visible projects.

It's important to note that price is not an issue here. Good software costs money, and can well cost a lot of money. One must be prepared to pay for the quality that one expects. The important gain that comes with foss is the free laboratory that you get along with it.

We move back to computer education, because it's the easiest example to start with when talking about software.

For every topic within the computer science and engineering umbrella, we have several foss projects that may act as the virtual 'lands' that we require. Each project can be a land in which the natural spoken language is the topic to be learnt. Operating Systems, Databases, Networks, Graphics, Communication, Multimedia, the list goes on. We have hundreds of lands, often overlapping, and the overlap can be a learning experience in itself. Much like a bunch of my cousin's kids from the UK who learnt Konkani after a month in Goa.

This is already happening in colleges, but at lower levels mistakes are being made. Rather than being given the ability to learn anything computer related, students are taught specific tools, which leaves them vulnerable to change.

It's like teaching English poetry by covering only works by Yeats. The outcome is that students cannot recognise the works of Ogden Nash, or even CSNY as poetry.

Things must change.

Popularity begets obsolescence

When asked to switch from teaching tools like Microsoft Office in favour of generic topics like office automation applications, a common retort is, "Should we not teach the current popular tools?"

The answer is a resounding no. Teaching specific tools, popular or not, leads to obsolescence when those tools cease to be in use, and who's to say that they won't? No one uses Wordstar, Lotus 123 or DBase today, yet these were the tools that we were taught to use in school. What should be the purpose of computer education?
  • Teach students to learn any tool
  • Let students learn through hands on experience
  • Throw responsibility into the hands of students
The idea should be to teach students concepts, and any tool that helps achieve this is good. Students should be exposed to a variety of tools, and the choice of specific tool should be theirs. A student may well choose the tool that gives him the edge when searching for a job.

Students can be put in charge of running the IT systems of a school. This will cut costs in a large way, and these students graduate from school/college with invaluable work experience that others only pick up after a year or two working in industry.

Why FOSS?

The big boys of FOSS all have their basis in education. Linux was started by Linus Torvalds to learn about the 386 architecture, and later to learn more about operating systems. LyX was written as a college project. The Gimp was written because its creators wanted to learn how to do graphical programming, and Gtk+ was born out of it because they wanted to learn how to write a good toolkit.

FOSS fosters education. For the persons contributing to it, and for the persons consuming it. The threshold for a user of Foss to become a contributor is extremely low - if we consider the different forms of contribution possible. Given the right language, it isn't hard for a domain expert to become a contributing developer.

Which brings us to other subjects.

Educational software already exists for non-computer related topics, and there is much FOSS to choose from. Software may be taken up and customised by a school. Specifically, students of higher classes could build or modify software for lower classes. These really do not have to be comp. sci. students. The emphasis here is not on getting the greatest algorithm implemented in code, or to squeeze out the last ounce of power from a low end machine. The emphasis is on applying domain knowledge to create a virtual world, on translating, for example, Newton's laws of motion to a set of rules by which a computer can build a simulation.

In Papert's experience, a child learns by teaching the turtle how to do stuff. The turtle here is a creature in the computer, and the child needs to teach this turtle how to first draw lines, then to use those lines to draw simple shapes, then to use those shapes to draw complex shapes, and further. In order to teach the turtle, the child must first figure out the steps herself, and that's where learning occurs.

As I write this, the same question keeps resounding in my head, "Ok, so this tells us how computers can be used effectively in education, but why FOSS?".

The answer stems from the ability of FOSS to build on another's ideas. Two students from different schools, or even different batches, may collaborate on the same idea. One may use libraries published by the other. The user of the library can gain insight into the ideas that went into building it, and can even suggest alternate approaches based on his or her usage of the library. Vinod Khosla seems to have similar ideas.

Academia is wont to publish findings, results and papers. FOSS is merely a solid implementation of that which is already published. Publishing one's learning as a FOSS implementation spreads both the knowledge and the discussion.

Much like Wikipedia allows users to collaborate on building information, students should be able to collaborate in their learning. The output needn't be completely correct, but it must be debuggable, and therefore free and open.

So who is using FOSS in education?

It depends on what we mean by using.
  • All of Mexico uses Foss in schools.
  • Several regions in France do too.
  • Schools in Virginia; in Portland, Oregon; and in several other US states.
  • Italian elementary schools regularly use Free Software.
  • Students at a school in Melbourne run its IT systems entirely.
However, none of the above actually contribute to domain-based FOSS, which is what we primarily need.

Learning is fostered by doing, teaching and collaborating. FOSS is based on all three, which is why FOSS makes sense for education.

I'm not going to link to locations where one can find free software for education. This discussion has been less about that, and more about learners contributing their learning as FOSS to improve the learning of others, and there aren't many links for that.

Other discussions

There have been discussions over the years about FOSS and Linux in education, and stories of successful implementations. These are just a few of the links I've collected. Most of them talk about implementing Linux in a school's IT department.


Monday, January 24, 2005

End of line backslash on blogger

If a blogger post has a line that ends with a backslash, blogger will delete the backslash and the following newline character, merging the two lines.

eg:
line1 \
line2

shows up as:

line1 line2

after posting, and in further edits.

They seem to be parsing the input as if it were a unix command line or something like that.

The solution is to put a space after the backslash.
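The behaviour can be mimicked with a one-line awk filter. This is just a sketch of what blogger appears to do, not its actual code:

```shell
# Join any line ending in "\" with the line after it, deleting the
# backslash and the newline -- the way blogger appears to.
printf 'line1 \\\nline2\n' | awk '{ while (sub(/\\$/, "")) { getline nxt; $0 = $0 nxt } print }'
```

A trailing space after the backslash stops the pattern from matching, which is why the space is the fix.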

Saturday, January 15, 2005

You've got mail! - loud and clear

You've got mail, announces the cheerful voice at AOL.
People who don't use AOL as their ISP will at least have seen it in advertisements and in the movie.

AOL's program doesn't tell you anything more than that, though. Who the mail is from, what it's about: nothing. To do that, one needs to parse a mailbox for the sender and subject, and then use a TTS tool to say them out loud.

Today I installed festival. It's a pretty cool TTS tool that runs on various unixes, which means probably Mac OS X as well.

I played around with festival for a few minutes while additional voices downloaded, and then hacked up this:

#!/usr/local/bin/bash

lock=/tmp/newmailnotify.lock
[ -e $lock ] && exit
touch $lock

# Print the From: and Subject: headers of the last message in the spool,
# prefix a greeting, and pipe the lot to festival's text-to-speech mode.
awk '/^From / {from_start=1;sub_start=1}
/^From:/ && from_start==1 {print; from_start=0}
/^Subject:/ && sub_start==1 {print; sub_start=0}' /var/mail/philip | \
tail -n2 | \
sed -e '1iYou'\''ve got mail
s/:/ /;s/ R[eE]://g;s/$/./' | \
festival --tts

rm -f $lock


I attached it to my Inbox Monitor, set to run every time the mailbox size increased, and now I have a (rather drab) British voice announcing my new mail, along with who it came from and what it's about.

Yes, the script could do with improvements. I'm currently too lazy to figure out why case insensitive matches aren't working with sed, or why I can't use alternation in my regexes, but hey, it's past 2:30am.

Comments and suggestions welcome.

Oh yeah, I planned on using a single lock file across users, because:
a. The audio device would be busy anyway
b. Parsing large mail files takes a lot of time and is disk intensive. I don't want more than one of these running at a time.
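As an aside, the `[ -e $lock ] && exit; touch $lock` pattern has a small race between the test and the touch. A sketch of a more robust variant using mkdir, which is atomic (with_lock is a name I made up for illustration):

```shell
#!/bin/sh
# mkdir either creates the lock directory and succeeds, or fails because
# another instance already holds it -- no window between test and create.
with_lock() {
    lock=/tmp/newmailnotify.lock.d
    if mkdir "$lock" 2>/dev/null; then
        "$@"            # run the guarded command
        rmdir "$lock"   # release the lock
    else
        echo "another instance is running" >&2
        return 1
    fi
}
```

Usage would be, e.g., `with_lock festival --tts < msgfile`.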

Update:
Festival was having trouble with Indian names and some of the mailing lists I'm on, so I added some entries to its lexicon. Unfortunately, I couldn't figure out how to get those entries loaded. .festivalrc did everything but select my lexicon. I think it selected the default lexicon after selecting mine.

The only solution was to convert my script up there to one that output a festival script (scheme) rather than plain text.

This is what I came up with:

#!/usr/local/bin/bash

lock=/tmp/newmailnotify.lock
[ -e $lock ] && exit
touch $lock

msg=$1
[ -z "$msg" ] && msg="You've got mail!"

# Extract the greeting, From: and Subject: lines, then wrap them in a
# festival scheme script that selects my lexicon and says the whole thing.
awk --assign msg="$msg" '/^From / {from_start=1;sub_start=1}
/^From:/ && from_start==1 {from=$0; from_start=0}
/^Subject:/ && sub_start==1 {subject=$0; sub_start=0}
END {printf("%s\n%s\n%s\n", msg, from, subject);}
' /var/mail/philip | \
sed -e 's/:/ /;
s/ R[eE]://g;
2,$s/$/./;
/^From/s/ </, </;
1i\
(lex.select "philip")\
(SayText "
$a\
")
' | \
festival --pipe

rm -f $lock

and this is what my .festivalrc file looks like:
(lex.create 'philip)

(lex.set.phoneset 'mrpa)
(lex.set.lts.method 'oald_lts_function)
(lex.set.compile.file "/usr/local/share/festival/lib/dicts/oald/oald-0.4.out")

(lex.add.entry '("sachin" n ((( s a ) 0) (( ch i n ) 1))))
(lex.add.entry '("vinayak" n (((v ii) 0) ((n ai) 1) ((@ k) 1) )))
(lex.add.entry '("amarendra" n (((a m) 0) ((@) 0) ((r ei) 1) ((n d r @) 0) )))
(lex.add.entry '("vijay" n ((( v ii ) 0) (( ch ei ) 1))))
(lex.add.entry '("ilug-bom" n (((ai ) 1) ((l @ g ) 1) ((b o m) 0) )))
(lex.add.entry '("linuxers" n (((l i) 0) ((n @ k s @ r z ) 1) )))

Interestingly, it reads out mm.ilug-bom as millimetres dot i-lug-bom.

The other changes in the script let you customise your lead-in message, and also ensure that From is read out before Subject.

Festival has an email mode, but modes only work when reading from a file or using the (tts 'filename mode) syntax. Since my input comes from stdin, there's no way to specify it.
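One possible workaround (a sketch, untested against festival itself): spool stdin into a temporary file first, then emit the file-based form described above for festival to evaluate. speak_email is a name I made up for illustration:

```shell
#!/bin/sh
# Read the message from stdin into a temp file, then print the scheme
# command that would make festival read that file in email mode.
# Pipe the output to `festival --pipe` to actually speak it.
speak_email() {
    tmp=$(mktemp /tmp/mailtts.XXXXXX)
    cat > "$tmp"
    printf '(tts "%s" (quote email))\n' "$tmp"
}
```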

Update 2:

Inspired by jace, I decided to try using procmail for this. The only change to the script is that /var/mail/philip is no longer in there. It reads from standard input. My procmail recipe looks like this:
:0 c
* ^From:
| /home/philip/bin/newmailnotify.sh
and I put it at the end of .procmailrc.

I haven't yet been inundated with email, so I don't know how it will work with bulk downloads. This of course runs after mails are sorted into folders, so only those that still make it to my inbox get reported.

Friday, January 14, 2005

Sigdashes

Sigdashes are a (de facto) way of specifying where your mail ends and your signature starts. They're pretty cool, because smart mailers and newsreaders can do funky things when they notice sigdashes.

For example, many mail clients will strip off old signatures when replying to mails. This is a Good Thing, because, hey, just one signature per mail ya?

Many mail clients, like mutt, can display signatures in a different colour or font.

So, what /are/ sigdashes?

The character sequence "dash dash space" on a line by itself is known as sigdashes. It looks something like this (without the quotes):
"-- "
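The signature stripping that smart mailers do comes down to one sed expression: delete everything from the sigdashes line to the end of the message. A sketch (strip_sig is a name I made up):

```shell
#!/bin/sh
# Delete from the line that is exactly "-- " (dash dash space) through
# to the end of the input -- i.e. drop the signature when quoting.
strip_sig() {
    sed -e '/^-- $/,$d'
}
```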

Configuring your mail client to use sigdashes:

Pine:
Setup | Config
- Composer Preferences | Enable Sigdashes
- Reply Preferences | Strip From sigdashes in reply

Mutt: (sigdashes on by default)
Unless sig_dashes is set to "no" in /etc/Muttrc or ~/.muttrc, you do not need to do anything. To set it explicitly, add to .muttrc:
set sig_dashes=yes

Thunderbird (via TagZilla):
In the TagZilla | Formatting screen, set Tagline Prefix to (without quotes)
"\n-- \n"

Thunderbird (no TagZilla) / Evolution / Web based mail:
Include the sigdashes line as the first line of your signature file/text.

Kmail / Outlook Express:
(No idea)


Go forth and spread the good news.

Saturday, September 25, 2004

Fallback Messaging

One of the things that drew me to Everybuddy, its USP really, was fallback messaging. I haven't seen any other client (other than eb's offspring -- ayttm and eb-lite) implement this feature, which is why I've never switched to another client.

So, what is fallback messaging?

Consider two friends who communicate via various network oriented means (IM, Email, SMS, etc.). We'll call them bluesmoon and mannu (because they are two people who communicated this way for several years before meeting IRL). Now, said friends are extremely tech savvy, and have accounts on virtually every server that offers free accounts, and then some.

So, you've got them on MSN, Yahoo!, AOL, ICQ... um, ok, not ICQ because ICQ started sucking, Jabber, and that's just IM. They prolly have 3 or 4, maybe 5 accounts on each of these services, ok, maybe just one on AOL. Then they have email accounts. The standard POP3 accounts, 3 gmail accounts, a Yahoo! account for every yahoo ID, and likely no hotmail accounts (even though they have MSN passports) because we all know that hotmail is passé.

These guys also have lj accounts and one or two cellular phones on different service providers.

Ok, now that we have our protagonists well defined, let's set up the scene.

Act 1, scene 1

Mannu and bluesmoon are chatting over, umm, we'll pick MSN to start with. So mannu and bluesmoon are chatting over MSN, when all of a sudden <insert musical score for suspense here> a message pops up:

The MSN server is going down for maintenance, all conversations will now end

Seen it before, right?

Sweet. So MSN decides that we're not allowed to talk anymore.

What are our options? Oh well, Yahoo!'s still online, so switch to Yahoo!. It's much nicer because you can chat while invisible; MSN (officially) doesn't let you do that.

So, they switch to Yahoo!, but... what the heck were they chatting about when the server went down? Context lost. They need to start a new conversation, most likely centred around cursing MSN. What's worse is that the earlier conversation was being archived because they may have needed it as a reference later. The new conversation can also be archived, but it's a pain to merge all these different archives later.

Anyway, they plough ahead. The conversation veers back on topic, ... but now the main net connection goes down. The only things that work are websites and email. What do you do? What do you do? Ok, Dennis Hopper I am not, so let's forget I said that.

The easiest option would be for bluesmoon to send a mail to mannu saying, "Hey dude, my net connection went down, gotta end the convo here.", or he could send the same in an SMS. But to do that he's gotta start yet another program and type out stuff out of context again, or worse, type out an SMS that he has to pay for!

So, here's where fallback messaging comes in.

Act 1, Scene 1, Take 2

<jump back to the MSN chat>

Where were we? Oh yeah, the MSN server goes down. Now, what if the chat client we were using was smart enough to figure this out, and do something about it? What's that something, you say? Switch to the next available service. So, in this case, the chat program would automatically and seamlessly switch to using Yahoo!

There are several user-centric pluses here. The people chatting do not need to know that a server went down, let alone care about it and figure out what to do. Archives will be maintained across sessions. The context of the conversation will be preserved. Mannu and bluesmoon can go on chatting as if nothing happened.

If all the IM protocols go down, the chat client could switch to Email or SMS. Of course, mannu would have to tell it explicitly to use one of these, because the conversation will no longer be real-time. There are going to be delays between sending a message and getting a response, so the chatters need to know about this.

So, how does your chat client know that you have buddies on multiple services, and about their email address and phone number?

Well, the chat client would have to group accounts on various services into a single contact. This kind of grouping also has other benefits.

Two people chatting with each other now don't have to think about user names and different services and what not. Mannu wants to chat with bluesmoon, he just selects bluesmoon from his buddy list. He doesn't have to care whether bluesmoon has an account on MSN, Yahoo!, AOL or whatever. Why should he care? So, I'd be chatting one to one with another person, without caring about what happens behind the scenes. Isn't that what makes for a good play?

Well, at some point mannu would have to care about services and user names, because he'd actually have to manually add and group all these accounts into one. Perhaps he could also set preferences for the order in which to fall back. That's all a one-time set up. For the continuous ease of use that follows, I'd say it's worth it.
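The fallback idea itself is simple to sketch. Assuming hypothetical send_msn, send_yahoo and send_email commands that exit non-zero on failure (stand-ins for illustration, not real tools), the client just walks the preference order until something works:

```shell
#!/bin/sh
# Hypothetical senders: pretend MSN is down, the others work.
send_msn()   { return 1; }
send_yahoo() { echo "via yahoo: $1"; }
send_email() { echo "via email: $1"; }

# Try each service in preference order; stop at the first that succeeds.
send_fallback() {
    for svc in send_msn send_yahoo send_email; do
        "$svc" "$1" && return 0
    done
    return 1
}
```

So `send_fallback "hello bluesmoon"` falls through MSN and delivers via Yahoo!, without either chatter having to notice.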

Final questions...

Is this really possible? Yeah, sure it is. You can thank Torrey Searle for that. Torrey implemented everybuddy to do just this, and threw in loads of sanity checking - thanks for that dude. It's what drew me to use and then work on the project for so long.

So, is this really possible? Probably not until IM companies decide that the network is just a transport, and it's the value a user derives from using that transport that makes them choose one service over another. It's why we choose the Mumbai-Pune expressway over NH4 that runs through the ghats, even though there's a toll.

Update: I did a talk on fallback messaging at Linux Bangalore 2004.

Friday, June 30, 2000

Hackers are Not Crackers

First written: 30-June-2000
Updated: 6-Nov-2000

My name is Philip Tellis and I love playing with computers. I have written this small primer on hackers and hacking meant to inform people of the correct terminology to be used. Much more information is available at the references mentioned in this article.

The Internet in India is growing rapidly and with it, several new business models, entertainment avenues and educational opportunities. The Internet has also exposed us to security risks that come with connecting to a large network.

The media has always latched on to stories of so-called `hackers' breaking into computer systems and wreaking havoc. This article is a sincere attempt to set the record straight as far as the terminology and process of `hacking' is concerned.

The hacker culture, as it is known, actually started way back in the 1950s, when computers were huge and bulky and programming them meant connecting wires to electrodes. Although they didn't call themselves hackers then, that pretty much explains what a hacker is.

The New Hacker's Dictionary defines a hacker as:
hacker n.
  1. A person who enjoys exploring the details of programmable systems and how to stretch their capabilities, as opposed to most users, who prefer to learn only the minimum necessary.
  2. One who programs enthusiastically (even obsessively) or who enjoys programming rather than just theorizing about programming.
  3. A person capable of appreciating hack value.
  4. A person who is good at programming quickly.
  5. An expert at a particular program, or one who frequently does work using it or on it; as in `a Unix hacker'. (Definitions 1 through 5 are correlated, and people who fit them congregate.)
  6. An expert or enthusiast of any kind. One might be an astronomy hacker, for example.
  7. One who enjoys the intellectual challenge of creatively overcoming or circumventing limitations.
This term seems to have been first adopted as a badge in the 1960s by the hacker culture surrounding the `Tech Model Railroad Club' (TMRC) and the MIT AI Lab. It was probably used in a sense close to this by teenage radio hams and electronics tinkerers in the mid-1950s.

All computer systems that we use today are based on research done by early hackers. Much of this research was done out of love for the subject, with no personal gain other than fame amongst the community. Hackers built the internet. Hackers built and maintain usenet. All internet related business today owes its origin to hackers.

The hacker community is a 'meritocracy based on ability'. Membership must be earned. One does not call oneself a hacker until other hackers recognise one as such. There is a certain ego satisfaction to be had in identifying yourself as a hacker.

Some of the more famous hackers of lore are Steve Jobs and Steve Wozniak, the founders of Apple Computer; Bill Gates, more of a hacker during his teens than later; Linus Torvalds, the guy behind linux; Richard Stallman, founder of GNU; Larry Wall, author of Perl; Bill Joy and James Gosling from Sun Microsystems; Dennis Ritchie and Ken Thompson from AT&T; and Bjarne Stroustrup, author of C++. Many of these hackers have reached demigod status in the community and are still active hackers.

Around 1980 or so, a new breed of computer-fed kids came up. With easy access to the internet in the US and Europe, they soon realised that they could easily break into other people's systems and do what they wanted. They called themselves hackers too. This was really unfortunate, because the name kinda stuck.

Real hackers do not consider such security breakers to be hackers. The term preferred for such persons is cracker:
cracker n.

One who breaks security on a system. Coined ca. 1985 by hackers in defense against journalistic misuse of hacker (q.v., sense 8). An earlier attempt to establish `worm' in this sense around 1981-82 on Usenet was largely a failure.
While it is true that many hackers possess the skills for cracking, anyone past larval stage is expected to have outgrown the desire to do so except for immediate, benign, practical reasons.

Contrary to popular belief amongst non-hackers, there is very little overlap between hackerdom and crackerdom. It has often been suggested that there is a very thin line between being a hacker and being a cracker, and several debates on the subject have been initiated on usenet and in geek media.

The basic difference between hackers and crackers is this:
hackers build things, crackers break them.
As a hacker, I build software programs for others to use. There is nothing illegal or shameful about the hacking I do. Most of my software is given away with the freedom to modify, reuse and redistribute with the only restriction being that these freedoms are always included. My hacks are meant to help other people, not hurt them.

With the introduction of the IT Bill, it is important that these facts be made public so that hackers in India do not have to be ashamed to admit who they are. It is also important to ensure that they are not seen as criminals in the eyes of the law. The law must clearly define what a `cybercrime' is and state clearly that hacking is not one of them. Cracking is. Make hacking a crime and one would have to charge every single proficient and competent computer programmer in this country.

This article seeks only to introduce the proper terminology. There is far more information available on the Internet, and I urge you to read through it. For starters, run through Eric Raymond's essay on `How to become a hacker'. Read through the jargon file and `A Brief History of Hackerdom', also at the same site. Then browse down to GNU and read the philosophy of free software. You may also want to get hold of a copy of `The New Hacker's Dictionary' and `Open Sources' from O'Reilly.

Hackers have a bad name primarily because of the way the media spreads reports of `hacking'.

In April 1988, ZDnet was conducting a survey. They used the word `hacker' to mean `cracker', but their readers didn't. Greg Lehey reports that approximately 80% of the respondents agreed that a hacker is as defined above and not the same as a cracker. I wonder how much that has changed in the last 12 years.

We request that you try to make things right. From now on, when you say hack, make sure you mean hack and not crack. You owe hackers an apology for spoiling their name, but most of all, you owe them respect.

Parts of this article have been taken from sources mentioned here, most notably, the jargon file and the hacker-howto by Eric S. Raymond. Please do read the originals.

Check out Greg Lehey's The term ``hacker'' as well.

The jargon file can be found at: http://www.tuxedo.org/~esr/jargon/html/Introduction.html

The hacker howto can be found at: http://www.tuxedo.org/~esr/faqs/hacker-howto.html
