Richard Lewis

Internet Free (beer) [May. 25th, 2010|12:01 am]

James Murdoch gave a lecture for the opening of the new Centre for Digital Humanities at UCL on Thursday. I wasn't there; it was invitation only and also I was at the Barbican listening to the LSO playing Turangalila. Fantastic!

I did, however, read the transcript of his lecture and wanted to make a few comments on content freedom.

Murdoch's view is essentially that content published online (and especially journalism) is a kind of commodity and that it must be paid for, or, in his reasonable and unaggressive terms, that content producers should be allowed to "assert a fair value for their online editions."

However, as well as being in favour of paid-for content, he's also quite anti free content. He describes what he calls the "digital consensus": that the virtuality of the internet requires that content published on it be free, and that the free and pervasive availability of content leads to a better society ("wiser, better informed and more democratic"). His references to "utopian" narratives possibly (but not explicitly) betray a dislike for the hippie culture of the internet's early days.

Similarly, he's quite critical of the British Library's intention to digitise its newspaper archives and publish them online free of charge. He describes how doing this helps the BL to secure additional public funding and seems to argue that this is an unfair form of competition: the BL is getting paid to publish free content, while media companies have to compete to sell theirs. He's also critical of the claims academic institutions make to justify publishing content online free of charge (increased access, preservation, and scholarly interest), arguing that ultimately they stand to gain financially from doing so.
"When we look over this terrain, we can see the economic pressures driving down the value of content are very powerful. Arguments over rights and wrongs seem little more than a disguise for self-interest."

He gives a brief account of the history of British copyright law, arguing that it was established to help protect the interests of content producers and that it must still play that role today. He argues that even those who do wish to publish their content free of charge stand to gain from copyright law, and that copyright incentivises content production.

Despite this concession, he later makes clear his stance on free content:
"If you want to offer your product for free, then there is nothing to stop you --- and it's a lot easier these days to do so. The only temptation you need to resist is the idea that what you want to do is what everyone else should be made to do."

Further, he argues that the future of the "creative industries" should be considered in an "economically serious way". This is probably his most revealing comment, betraying his attitude towards freedom of content: that it's unserious; that it's silly, hippie utopianism which can't stand up to the might of capitalist media imperialism; that those involved in free content cannot be serious about what they produce.

Finally on the subject of objecting to the free, he argues that if news producers do not charge for their content, then the only people able to produce news would be "the wealthy, the amateur or the government." Of course, in some regimes state-sponsored news may be biased and even harmful, but this happens not to be the case with the BBC, which has a remit of impartiality. Even more concerning is his implied assumption that the private sector is likely to produce higher quality, less partial news than governments, amateurs and the wealthy. In fact, private sector news producers rely on, and are therefore subject to the opinions of, the wealthy, who often determine how, and even what, news gets reported by private sector news businesses.

My main criticism, then, is of his assumption that paid-for, private sector content is necessarily better than free and/or public sector content. There are, however, two other points in this lecture that I find interesting.

First is his use of the term "content" to describe and generalise the published work he wants to protect. "Content" is a very digital-age notion: it implies a late twentieth-century conception of knowledge capital and knowledge work that seeks to reduce literature to information which can be quantified, homogenised, stored, and transmitted. Content (in this sense) has de-coupled the arts from practice: inscriptive mechanisms---printing, recording, digital encoding---change the nature of art works from practice to text. It's this text that Murdoch obsesses over while creative practitioners are, in fact, increasingly returning to art as practice. His failure to recognise the importance of practice as against content undermines his universal claims for the supremacy of paid-for content.

The other is his conception of the "humanities". At one point he effectively equates the humanities with "the creative industries". He also (when describing the importance of private sector news production) appeals to those who "really care[s] about the humanities of tomorrow" to feel the same as he does about private ownership of media. This implies a total lack of understanding of the critical (by which I mean being critical) role the humanities must play. The humanities cannot be privately sponsored and thereby subject to bias and politicisation; they must be independent, state-sponsored, and provide a voice of criticism in the world of content production, business and politics.

e-Research on Texts and Images [May. 12th, 2010|04:40 pm]

I went to a colloquium on e-Research on Texts and Images at the British Academy yesterday; very, very swanky. Lunch was served on triangular plates, triangular! Big chandeliers, paintings, grand staircase. Well worth investigating for post-doc fellowships one day.

There were also some good papers. Just one or two things that really stuck out for me. There seems to be quite a lot of interest in e-research now around formalising, encoding, and analysing scholarly process. The motivation seems to be that, in order to design software tools to aid scholarship, it's necessary to identify what scholarly processes are engaged in and how they may be re-figured in software manifestations. This is the same direction that my research has been taking, and relates closely to the study of tacit knowledge in which Purcell Plus is engaged.

Ségolène Tarte presented a very useful diagram in her talk explaining why this line of investigation is important. It showed a continuum of activity which started with "signal" and ended with "meaning". Running along one side of this continuum were the scholarly activities and conceptions that occur as raw primary sources are interpreted, and along the other were the computational processes which may aid these human activities. Her particular version of this continuum was describing the interpretation of images of Roman writing tablets, so the kinds of activities described included identification of marks, characters, and words, and boundary and shape detection in images. She described some of the common aspects of this process, including: oscillation of activity and understanding; dealing with noise; phase congruency; and identifying features (a term which has become burdened with assumed meaning but which should also be considered at its most general sometimes). But I'm sure the idea extends to other humanities disciplines and other kinds of "signal" or primary sources.

Similarly, Melissa Terras talked about her work on knowledge elicitation from expert papyrologists. This included various techniques (drawn from social science and clinical psychology) such as talk-aloud protocols and concept sorting. She was able to show nice graphs of how an expert's understanding of a particular source switches between different levels continuously during the process of working with it. It's this cyclical, dynamic process of coming to understand an artifact which we're attempting to capture and encode with a view to potentially providing decision support tools whose design is informed by this encoded procedure.

A few other odd notes I made. David DeRoure talked about the importance of social science methods in e-Humanities. Amongst other things, he also made an interesting point that it's probably a better investment to teach scholars and researchers about understanding data (representation, manipulation, management) than it is to buy lots of expensive and powerful hardware. Annamaria Carusi said lots of interesting things which I'm annoyed with myself for not having written down properly. (There was something about warning of the non-neutrality of abstractions; interpretation as arriving at a hypothesis, and how this potentially aligns humanistic work with scientific method; and how use of technologies can make some things very easy, but at the expense of making other things very hard.)

Also, I gave a talk at Goldsmiths Spring Review Week today. It's basically a chance for Ph.D students to get together and talk about what they've been doing all year. One interesting aspect of it is that it's Ph.D students from all departments, so you have to assume that your audience are non-expert. I spoke about "Computational Approaches to Scholarly Procedure in Musicology". (See, I told you everyone is thinking the same way as me!)

Decoding Digital Humanities: Procedural Literacy [Apr. 19th, 2010|01:00 pm]

I went to the second Decoding Digital Humanities meeting last week. We read a paper suggested by me on procedural literacy: understanding the ideas behind computer programming, even if not being familiar with any specific language. As expected, a number of interesting points and criticisms were raised.

A suggestion was made that the paper tried to equate computer languages to human (or "natural", as computer scientists often call them) languages. I would argue that this wasn't intended by the author. He talks about communicating ideas of process through computer languages, but doesn't argue that they can be used for other kinds of communication.

This discussion did lead on to the question of whether this kind of thinking really just essentialises technology. It was argued that computer programming shouldn't be seen as a panacea for forcing focus. Although attempting to express ideas in a formal language undoubtedly does force the focusing of those ideas, requiring their explicit and unambiguous expression in terms which the computer can understand, should programming be seen as the only method of doing this? It's probably the case that any writing exercise will force the focusing of ideas.

Also raised was the criticism against digital humanities of "problematising innovation". Perhaps arguments against DH are more arguments against unsettling the status quo? What can these changes possibly have to do with our discipline? Of course, the point was made that the current stratification of disciplines in universities is quite a modern construction and is most likely subject to continual change, whether the impetus be intellectual, geographic, or economic.

It was suggested that the argument in the paper may be a classic example of the problematic role of the critic: is it valid to criticise a practice in which you yourself are not skilled? This reminded me of one recent body of scholarship I've engaged with by Harry Collins, a sociologist of science. He describes the difference between "contributory expertise" and "interactional expertise". Contributors to a discipline are those who are trained in the practices of the discipline - who conduct experiments and generate new knowledge. But Collins also argues that there's a class of expertise he calls "interactional" in which the expert has engaged with other practitioners in the discipline to a sufficient extent that he can hold conversations with them and understands all the important principles.

Perhaps procedural literacy could be a class of interactional expertise, rather than a necessarily practical engagement?

Been Cited [Apr. 16th, 2010|06:33 pm]

I've been cited by a blogger in a pseudo-academic fashion! Very exciting.

Ways into Humanities Computing [Mar. 29th, 2010|06:15 pm]

Some time ago (probably mid-November 2009) I read a short article (Joseph Raben, Introducing Issues in Humanities Computing, DHQ 1:1 Spring 2007) in the first issue of Digital Humanities Quarterly which ends with a series of questions to be asked about humanities computing, its nature, outcomes, effects. I made a note to myself to answer these questions and have finally got round to having a go. Some I have no idea how to answer, some I can give a few opinions on, and some I know I need to say a lot more about.


Can software development, rather than conventional research, serve as a step up the promotion ladder?

So does software development count as a valid research output? This problem can be generalised to the concerns of practice-led research. Does the scholarly community accept work such as musical compositions, painting and sculpture, biographies, digital art, and fiction as valid research outputs? There are certainly structures in place which allow scholars up to and including doctoral level in arts and humanities areas to have evidence of their practice considered as part of their research. And disciplines which include engineering components, such as computer science, often produce doctoral theses with substantial practical components. But beyond doctoral level the accepted product of research, in the arts and humanities at least, becomes homogeneous with the mode of its communication: journal articles, conference papers, book chapters, monographs. However, humanities computing stands at an interesting intersection between a humanist discipline and an engineering/science discipline. Its broad questions are likely humanistic (to make observations about the human condition based on evidence of human activity), but its methods may be more related to computer science (development and use of software). Which of these two components of the research (findings and methods) is the more publication-worthy? My own opinion is increasingly that software is a means to capture and express procedure, and that procedure in scholarship (as well as in other areas) should be considered a valid object of study.

Are there better ways to organize our information than the current search programs provide?

How far should we trust simple information retrieval methods to tell us what is relevant and interesting? The idea of automatic relevance ranking based on keyword matching does seem a bit dry and inhuman, but it has certainly become commonplace. Computers are now relied upon to make judgements of similarity, and not just with text; there's a whole field of study which attempts to get computers to make judgements of musical similarity.
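
As a toy illustration of what keyword-based relevance ranking amounts to (nothing like a real retrieval engine, which would weight terms, with tf-idf for example; the query and file names here are made up):

# Rank text files by how many lines mention the query term,
# most "relevant" first
query="manuscript"
grep -ic "$query" *.txt | sort -t: -k2 -rn | head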

How do we confront the trend toward English as a universal scholarly language in the face of objections, such as those from France? How far need we go in accommodating other world languages---Spanish, Russian, Chinese?

[...]

How concerned should we be about the consequence of Web accessibility undermining the status of major research centers in or near metropolitan cities?

I've used access grid, I regularly talk with colleagues in IRC channels, I use Skype and instant messaging tools and, of course, make regular use of email. I've also watched/listened to recorded lectures. But I'm not convinced that any of these things really replace the nuances of human communication which may be vital for serious discussion and networking.

Has the availability of the Internet as a scholarly medium enhanced the academic status of women and minorities?

[...]

Will humanists' dependence on computer-generated data lead to a scientistic search for objective and reproducible results?

This, of course, assumes that humanists will become dependent on computer-generated data, and that they will interact with that data via computational means. I oppose this to mere digitisation, in which artifacts of scholarly interest (such as manuscripts, printed texts, paintings) are merely transcribed onto a digital medium and made more easily accessible; the mode of interaction with such digitised artifacts is often non-computational, it's just a more convenient way of looking at them. Genuine computational interaction with artifacts, on the other hand, may well call for new understandings amongst humanist scholars, and lead to new priorities and concerns in their research. I see two such potential major changes. Computational techniques may require that the tacit knowledge and implicit procedures that humanist scholars use become explicit and reproducible by being encoded and published in software, somewhat reminiscent of the necessarily pedantic detail used in "methods" sections of scientists' papers. The other change relates to humanists embracing the opposite of their typical close reading paradigm, adopting "distant reading" techniques. The question of what you can do with a million books requires that a scholar knows how to deal with the quantity of information contained in such a corpus. This includes learning to generate valid statistics and to draw legitimate conclusions from them. Whether or not any of this counts as objective is another matter.
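
As a trivial sketch of the quantitative move involved, here is the classic shell pipeline for term frequencies across a corpus of plain-text books (corpus/ is a hypothetical directory; real distant reading would go much further than counting words):

# The twenty most frequent words across a corpus
cat corpus/*.txt |
  tr -cs '[:alpha:]' '\n' |     # one word per line
  tr '[:upper:]' '[:lower:]' |  # fold case
  sort | uniq -c | sort -rn |   # count and rank occurrences
  head -20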

Can we learn anything about today's resistance to new technologies from studying the reactions in the Renaissance to the introduction of printing?

[...]

Will digital libraries make today's libraries obsolete?

I can't imagine using a card index over an OPAC (online public access catalogue), and having online access to journal literature is infinitely more convenient than browsing through dusty old archives in library basements. I'm also very keen on digitisation projects as a way of opening up access to important (and maybe also seemingly not so important) artifacts for scholars. Access to information and resources from your desktop is certainly a major advantage. But we will still require institutions which foster and make use of information expertise. Catalogues are only as good as the people who design and maintain them. These are certainly the domains of expertise of libraries and, I imagine, will continue to be so. There is also the question of serendipity in library browsing; simply scanning the shelves can sometimes turn up items which would probably never have been the subject of a "relevant" keyword search.

Are the concepts and development of artificial intelligence relevant to humanistic scholarship?

Why ask this question? Is it because artificial intelligences may take on the status of human agents whose thoughts and actions could be argued to be in the domain of interest of humanist scholars? Is it because artificial intelligences may be able to perform the same functions, make the same judgements and arguments as human humanists? Or even that the whole artificial intelligence project (and its wider context of enquiry into the nature of human cognition and intelligence) could be the subject of a humanities study?


Decoding Digital Humanities [Mar. 20th, 2010|10:59 am]

I went to what amounts to a digital humanities pub meeting on Tuesday. It was called Decoding Digital Humanities, held at a pub near UCL, and organised by the new Centre for Digital Humanities at UCL. There must have been about 30 participants, though mainly from UCL.

They set some reading: Walter Benjamin's "The Work of Art in the Age of Mechanical Reproduction" and the Wikipedia article on Digital Humanities. The Benjamin evoked discussions on aspects of digital humanities which I've never really considered seriously: the whole area of new media art, digital literature, interactive narrative. It seems I've always considered digital humanities (like its non-digital parent) an analytic discipline rather than a synthetic one.

The Wikipedia article, owing to its broadly expository nature, seeded a discussion on the definition of digital humanities. Like many of these discussions, it left me with the impression that DH has more to do with creating digitised versions of the kinds of artifacts that interest humanist scholars, although there was also discussion of the changes that digital humanities may bring about, not only to the traditional humanities but also to computer science. It was suggested that computer scientists find working with humanist data sets interesting and challenging because of their fuzziness. This point also led to an interesting question: what is the correct/a good technical term for describing this kind of qualitative data?

But the discussion never quite got to considering what the valid questions of digital humanities might be. What are the techniques that make digital humanities digital? Is digital humanities just computer-assisted humanities, easy and interactive access to publications, manuscripts and other artifacts? Or is there a new programme of scholarship which computational methods may make possible?

Blogging [Mar. 19th, 2010|05:35 pm]
I have a recurring TODO note in my org-mode which reminds me to write a blog post every week. This reminder also offers me the possibility of making a small note each time I mark it as DONE. Since 11 August 2009 I've CANCELLED every single one of those reminders, but I've always left myself a note. So here is the unaltered log of all my failed blogging attempts since 11 August 2009. Consider it a kind of sub-blog.

  - State "CANCELLED"  from "TODO"       [2010-03-15 Mon 21:45] \\
    Purcell Plus meeting. Very good Evensong (Rach Bog, Sumsion in A,
    Copi pieces). Saw Ali.
  - State "CANCELLED"  from "TODO"       [2010-03-08 Mon 11:27] \\
    Parents visiting. Horniman museum.
  - State "CANCELLED"  from "TODO"       [2010-02-28 Sun 12:21] \\
    Can't think of anything in particular. Luc Steels talk was good.
  - State "CANCELLED"  from "TODO"       [2010-02-21 Sun 22:06] \\
    Went to Bromley. Mozart Requiem. Meeting with Alex about Beyond
    Text. MLI conference. Discovered importance of relating computer
    science as science of procedure with procedure in scholarship.
  - State "CANCELLED"  from "TODO"       [2010-02-12 Fri 18:46] \\
    No blogging today
  - State "CANCELLED"  from "TODO"       [2010-02-12 Fri 11:59] \\
    No blog this week
  - State "CANCELLED"  from "TODO"       [2010-02-02 Tue 11:21] \\
    Zoe went to see the new baby.
  - State "CANCELLED"  from "TODO"       [2010-01-24 Sun 14:07] \\
    Choral evensong with Leighton! Birthday. V&A Decode exhibition.
  - State "CANCELLED"  from "TODO"       [2010-01-18 Mon 17:18] \\
    Live coding event at KCL
  - State "CANCELLED"  from "TODO"       [2010-01-11 Mon 17:54] \\
    I did blog, just not about my week
  - State "CANCELLED"  from "TODO"       [2010-01-02 Sat 12:26]
  - State "CANCELLED"  from "TODO"       [2010-01-02 Sat 12:25] \\
    Christmas at mum and dad's (lots of snow and ice). New year at home.
  - State "CANCELLED"  from "TODO"       [2009-12-21 Mon 11:14] \\
    SBCL 10. Carol service. Where the Whild Things Are
  - State "CANCELLED"  from "TODO"       [2009-12-17 Thu 11:13] \\
    RCUK e-Science review. All Hands Meeting.
  - State "CANCELLED"  from "TODO"       [2009-12-07 Mon 09:04] \\
    Crib service. Mum and dad visiting. Doing RCUK poster.
  - State "CANCELLED"  from "TODO"       [2009-11-29 Sun 22:03] \\
    Advent carol service. Real-time. I did actually blog about CLSQL
    and MySQL table name case sensitivity
  - State "CANCELLED"  from "TODO"       [2009-11-23 Mon 09:37]
  - State "CANCELLED"  from "TODO"       [2009-11-17 Tue 17:59] \\
    It was very windy on Saturday. Zoe is away in Ghent. I had a bit
    of a fail with marking the relations tests (but did quite well
    on learning about them).
  - State "CANCELLED"  from "TODO"       [2009-11-03 Tue 09:26] \\
    Mum and Emma visited to see Breakfast at Tiffany's. Durufle
    reqiuem. Zoe away.
  - State "CANCELLED"  from "TODO"       [2009-10-26 Mon 17:16] \\
    UEA seminar. SMITF broadcast.
  - State "CANCELLED"  from "TODO"       [2009-10-17 Sat 12:39] \\
    Trying to write UEA paper. Got caught by libpthreads bug which
    disabled X server. Set up kernel mode switching. Seems that
    this causes xrandr not to work, and DisplaySize not to work,
    DPI to be broken, etc. etc.
  - State "CANCELLED"  from "TODO"       [2009-10-12 Mon 09:10] \\
    Went to Thomas Dixon lecture. Was ill most of the week.
  - State "CANCELLED"  from "TODO"       [2009-10-05 Mon 12:56] \\
    Sang your first solo. Zoe got an AAO.
  - State "CANCELLED"  from "TODO"       [2009-09-27 Sun 22:22] \\
    We went to Ham Hall. Sang first evensong at SMITF. Got some
    teaching jobs. Learned to use ssh-agent.
  - State "CANCELLED"  from "TODO"       [2009-09-21 Mon 09:44]
  - State "CANCELLED"  from "TODO"       [2009-09-15 Tue 15:03] \\
    You went to see Dido and Aeneas and it was great! You also
    still need to blog about DRHA.
  - State "CANCELLED"  from "TODO"       [2009-09-05 Sat 18:23] \\
    I'll blog after DRHA
  - State "CANCELLED"  from "TODO"       [2009-09-01 Tue 09:15] \\
    Must get back in to blogging
  - State "CANCELLED"  from "TODO"       [2009-08-24 Mon 09:47] \\
    Did a review, but no blog
  - State "CANCELLED"  from "TODO"       [2009-08-17 Mon 10:19] \\
    See weekly report excuse.
  - State "CANCELLED"  from "TODO"       [2009-08-11 Tue 11:13] \\
    Was away in Ross for Three Choirs
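
(For the curious, the reminder behind all this is just an ordinary repeating org-mode entry, something like the sketch below; the exact keyword spec is a guess. The "@" makes org prompt for a note whenever the entry enters that state, and the "+1w" repeater pushes the scheduled date on a week each time the entry is closed.)

#+TODO: TODO(t) | DONE(d@) CANCELLED(c@)

* TODO Write a blog post
  SCHEDULED: <2009-08-11 Tue +1w>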

Kernel mode setting and hibernate [Jan. 20th, 2010|12:14 am]

I've been having a few problems with resuming from suspend to disk (or hibernate) recently. Essentially, I end up with a blank screen after resuming. Occasionally, I can switch VT (using Ctrl Alt F1 for example), but more often I can't do anything but a cold reboot.

At the same time I've also noticed some changes in xrandr (the screen resolution switching tool for X); different resolutions are available for my external monitor and the displays' names have changed (from LVDS to LVDS1 for example).

This led me to wonder whether the two things might be connected. It turned out that kernel mode setting had been enabled on my Debian unstable system, and I'm guessing that this may be causing problems with X resuming correctly.
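
A quick way to check (this assumes the Intel i915 driver; module parameters are exposed under /sys once the module is loaded):

# 0 means kernel mode setting is disabled for i915
cat /sys/module/i915/parameters/modeset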

I've altered /etc/modprobe.d/i915-kms.conf to contain:
options i915 modeset=0

and (as I'm now using GRUB2) added "nomodeset" to GRUB_CMDLINE_LINUX in /etc/default/grub. (For old GRUB, it used to be a case of editing /boot/grub/menu.lst.) I'm not sure whether both of these changes are necessary (I only know that the nomodeset option by itself wasn't sufficient), but now kernel mode setting has been disabled and resuming from hibernation seems to be working OK again.
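
One caveat for anyone trying the same thing on Debian: changes to /etc/default/grub and to files under /etc/modprobe.d/ only take effect once the generated files are rebuilt, something like:

update-grub          # regenerate /boot/grub/grub.cfg from /etc/default/grub
update-initramfs -u  # rebuild the initramfs so the i915 option is seen at boot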

Bye bye DEs [Jan. 10th, 2010|11:18 pm]

I've tried two different full desktop environments under Linux: KDE and GNOME. I used KDE quite a lot up to and including the last 3.5 release. When KDE 4.0 happened I jumped ship and configured myself a nice little Fluxbox environment. I always had one problem with Fluxbox, though: GTK Emacs rendering was too slow; often, when scrolling through a file, the display simply wouldn't update quickly enough to show the contents as they moved. The only solution I had for this was to use GNOME, in which GTK Emacs rendering was fine. This was all very well except for all the bloat of GNOME; it takes quite a while to start and provides lots of features that I never use.

Yesterday I realised (can't think why I hadn't thought of it before) it might be Metacity that makes GTK Emacs rendering work and I could just use Metacity by itself without the rest of GNOME. So I found out how to start an X session with just Metacity, fired up Emacs and sure enough, it works beautifully.

So I've now abandoned full desktop environments in favour of simple window managers with a few handy applications.

The main thing I had to learn was how to configure a custom X session. It turned out to be a simple matter of creating an ~/.xinitrc script which starts any programs I want running in my session and then calls exec metacity. I also symlinked this to ~/.xsession (I'm not sure which actually does the business for GDM). Then from GDM I choose 'Run Xclients script' and I'm logged in to my custom session.
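
For reference, that one-off setup amounts to something like this (assuming it is ~/.xsession that GDM's 'Run Xclients script' session reads; the script also needs to be executable, if I remember rightly):

chmod +x ~/.xinitrc           # session scripts must be executable
ln -s ~/.xinitrc ~/.xsession  # one script serves both startx and GDM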

My ~/.xinitrc file looks like this:
#!/bin/bash

# Bell off, fast keyboard repeat, tap-to-click on the touchpad
xset b off & xset r rate 195 60 & synclient TapButton1=1 &

# Make OpenOffice.org behave as if under GNOME, and set a UK UTF-8 locale
export OOO_FORCE_DESKTOP=gnome
export LANG="en_GB.UTF-8"
export LC_ALL="en_GB.UTF-8"
export LANGUAGE="en_GB.UTF-8"
export LC_CTYPE="en_GB.UTF-8"

# Custom key mappings, then the GNOME settings daemon (GTK themes, fonts)
xmodmap ~/.Xmodmap & gnome-settings-daemon &

# Desklets (without a tray icon) and the Conkeror daemon
gdesklets --no-tray-icon start &
conkeror -daemon &

# The window manager itself; the session ends when metacity exits
exec /usr/bin/metacity

Metacity allows me to configure keybindings using gconf, by setting keys under /apps/metacity/global_keybindings and /apps/metacity/keybinding_commands. So I have bindings for my xterm, for starting the Emacs client, for Conkeror, for gRun (a GTK run dialog to replace the one I lose by not using GNOME), for hibernating, and for logging out (just a script which quits Emacs and kills Metacity).
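
As a sketch, setting one such binding from the shell looks something like this (the key names are from memory, so check them in gconf-editor before relying on them):

# Bind Ctrl+Alt+T to command slot 1, and point that slot at an xterm
gconftool-2 --type string --set /apps/metacity/global_keybindings/run_command_1 '<Control><Alt>t'
gconftool-2 --type string --set /apps/metacity/keybinding_commands/command_1 'xterm'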

I've also tried using gDesklets just to get a pretty clock and weather report, but I'm not sure these are really necessary.

One step closer to desktop heaven...

Dynamically Typed Languages and Teaching [Dec. 27th, 2009|06:01 pm]

Having worked with undergraduate programming students for a short while now, I've found that data types seem to be one issue that gets some students confused. This leads me to wonder whether teaching with dynamically typed languages might be better.

It seems to make sense that relieving students of having to worry about what data types are, and of the extra opportunities for mistakes that come from having to declare variables and type in data type names, should benefit them in learning to program; it's one less thing to consider while they're learning to use control structures and procedural abstraction. But on the other hand, there's a subtler potential for confusion when those languages are still strongly typed (as most candidate dynamically typed teaching languages are: Python, Scheme, Smalltalk). Student programmers will still do incorrect things with variables which will cause errors in any strongly typed language. If that language is also dynamically typed, then the consequences of those mistakes are merely commuted from compile time to run time. Run-time type errors, I think, are probably more confusing than compile-time type errors. It therefore makes more sense to me that students should learn about data types early on in learning to program, that a sound grounding in data types is at least as important as one in procedural abstraction, and that a language that promotes a good understanding of data types by being very explicit about them is probably a better choice than a dynamically typed language.
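
A minimal illustration of that commuting of errors from compile time to run time (my own toy snippets, assuming python and ghc are installed; the error messages are approximate):

# Dynamically (but strongly) typed: the mistake only surfaces when the line runs
python -c 's = "5"; print s + 1'
# TypeError: cannot concatenate 'str' and 'int' objects

# Statically typed: the equivalent mistake never compiles
echo 'main = print ("5" + 1)' > Add.hs
ghc Add.hs
# No instance for (Num [Char]) arising from the literal `1'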

Now, where's my Haskell text book...
linkpost comment
