Friday, February 1

In a welcome change from my usual schedule of molecular cell biology practicals, which involve using pipettes to move bits of liquid from one place to another and centrifuging them, I instead had a two-hour bioinformatics practical this morning. Bioinformatics is basically the use of computers and mathematical techniques to analyse biological information, usually but not always in the form of DNA sequences.

As I navigated my way through the crumbling remains of the Department of Genetics building in Cambridge, passing overloaded trolleys and walls with holes in them, I remarked to myself that it was surprising they still managed to do decent research there. I made it to the computer lab on the second floor fine, and it immediately struck me as looking like no computer lab I'd ever seen before. It was a medium-sized room with a motley assortment of ageing SGI and PC hardware lining the walls, all running Linux. Small plastic figurines decorated every available surface; I could see at least two fridges humming away, along with three coffee machines and countless genetics and computer books sitting on rickety bookshelves.

There was a single non-student there, tapping away on a Linux box, with a resplendent dark beard, black T-shirt and mournful blues music piping out of some computer speakers. I noticed that there was a lot of blues music in the room. The non-student - the instructor - alas, was not wearing a Linux T-shirt.

The purpose of the practical was relatively straightforward: we had to compare a given protein sequence with a library of known protein sequences from other animals to determine homology and possible function. What I found really fun, though, was that I'd finally found a non-computer-science place in the university that was home to a few true Linux fanatics. For example, the instructor went into a long digression about the use of conflicting standards in bioinformatics programs - e.g. how different brackets do different things - and bitterly talked about the need for standardisation. I nodded sagely, recognising the sign of a veteran Unix hacker.

It was interesting to actually use the genetic comparison programs and see how simple and yet powerful they were. To any experienced Perl or C programmer, the stuff we were doing was kid's play - looking for particular patterns or probabilities - but then most biologists don't really know what possibilities are available. I had a bit of fun looking at the homology between human and chimpanzee protein sequences and seeing the sad state of secondary structure prediction algorithms. Roll on IBM's Blue Gene, say I.
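As a purely illustrative taste of what such a comparison boils down to, here's a minimal Python sketch of percent identity between two aligned sequences. The toy sequences are made up, and real tools like BLAST use scoring matrices and gapped alignments rather than raw position matching:

```python
# Minimal sketch of pairwise sequence comparison: percent identity
# between two already-aligned protein sequences of equal length.
# (Hypothetical sequences; real homology searches are far cleverer.)

def percent_identity(a: str, b: str) -> float:
    """Percentage of positions at which two aligned sequences match."""
    if len(a) != len(b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return 100.0 * matches / len(a)

human = "MKTAYIAKQR"   # invented example sequence
chimp = "MKTAYIAKHR"   # invented example sequence, one substitution
print(percent_identity(human, chimp))  # 90.0
```

High percent identity between sequences is the sort of signal such programs use to suggest homology, and hence possible shared function.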

And then during another talk, he bemoaned the slow performance of a particular Java program on his beloved yet ageing SGI Indy and PC Linux boxen, and then promised that next year's class would have a far more advanced experience. While showing us an open source program he was using to demonstrate how you could find the 'turning point' in a membrane-spanning protein, the thing went mad on him and was clearly bug-ridden. He then began a muttered diatribe, in addition to an earlier one he made about another program which couldn't do multiple outputs simultaneously. "We've told him about the problem, but he won't listen, so we've just given up."

Ahh... open source programs, SGI boxes, programs that use lots of impressive-looking DNA sequences and make pretty graphs, and Linux hackers - what more could you ask from a practical?

0: good or bad? / forum / 05:01 pm GMT

Thursday, January 31

If you ever want to silence an experimental psychologist, ask them the question, "How do I reject candidates in the tip of the tongue state with only partial information?"

Actually, you could probably silence anyone by saying that, but not for the right reasons. So what does it all mean? Well, what I'm really talking about here is the issue of recalling memories. I'll start from the beginning. There used to be two schools of thought among psychologists about how we recall memories.

The first school believed that we could remember data via an indirect method, using 'extra experimental information'. The second school believed that that was categorically not possible, and you could only remember data directly.

What do I mean? Let's imagine you are given the words 'time' and 'blue' and you have to remember them. Let's say that when the time comes, you can remember the word 'time' but you can't remember 'blue'. The experimenter takes pity on you, and says, "OK, I'm going to give you a clue. The clue is 'green'." Suddenly, you say, "Aha, yes, I remember now, the word is 'blue'."

What some people believe has happened here is that the 'retrieval cue' of 'green' sets off the node in your brain that represents 'green'. This node gets excited and sets off all the other nodes that it is connected to, in varying magnitudes according to the strength of its connections to them. So right then, the nodes for 'grass', 'red', 'peas' and 'blue' all get set off. All the nodes which are set off are examined for suitability to the present task, i.e. trying to remember what 'time' goes with, and 'blue' turns out to be the most suitable.

I've just described the indirect method of memory retrieval, indirect because you aren't retrieving the 'tbr' (to be remembered) item directly, you're retrieving something that is linked to it first.

Various studies were carried out on this by Bahrick (1970), and they were all very convincing, and people said, yes, the brain must be able to perform indirect retrieval! But a guy called Tulving absolutely disagreed with this, which was quite daring since everyone was already convinced by Bahrick. Not that daring though, since Tulving had some pretty fine experiments himself.

Let's say you're doing an experiment just like the first one I described, but you have to remember the two items 'dirty' and 'city'. Yet again, when the time comes, you can't remember either. To give you a clue, the experimenter says, "Think of the word village - that's a clue." You rack your brains, and then you come up with 'city'. Well done.

But there are dozens of other people doing the test, and some of them still can't remember 'city' even when given the 'village' clue. In addition, another set of people who can't remember 'city' are asked to perform a free recall, in which they just say any words that they think could possibly be the one they're looking for. The results were surprising; the people who were given the 'village' clue did not perform significantly better than the people who weren't given any clues - the free recall people.

According to the indirect access model, that doesn't make any sense! They've just been given a clue - a clue which is a cue, a retrieval cue, since theoretically the 'village' node should be connected to the 'city' node. So Tulving declared this to be the death of the indirect access model, and he came up with something called the Encoding Specificity Principle. In short, this principle stated that any clues you give people trying to remember items will only be useful if those clues were present at the same time the to-be-remembered item was first learned. So the clue of 'dirty' would work well, but only because you were exposed to it at the same time you were exposed to 'city'.

Tulving had another experiment which was just as convincing. People decided that indirect access just couldn't work.

A decade later, Jones (1982) carried out a classic experiment that overturned the Encoding Specificity Principle. It was a very simple and elegant experiment. He gave subjects pairs of words to be remembered, such as:

sleep - ORANGE

The first group were shown the words, and then they came back a bit later, were shown the cue words 'sleep' and 'tide', and were asked to remember the other word in each pair. If they couldn't remember, tough, they weren't getting any clues.

However, if members of the second group couldn't remember the other words, they were told by the experimenter, "Here's a clue. Look at the words 'sleep' and 'tide' and read them backwards." Ah, you see - they spell out 'peels' and 'edit'! Those crafty psychologists, eh? As you might expect, that clue allowed the second group to perform far better at recall than the first group.
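If you want to see the trick laid bare, a couple of lines of Python will do it:

```python
# The trick behind Jones's clue: each cue word, read backwards,
# spells out a to-be-remembered word.
for cue in ("sleep", "tide"):
    print(cue, "->", cue[::-1])
# sleep -> peels
# tide -> edit
```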

Why does this overturn the Encoding Specificity Principle? Because the clues they were given were not given to them at the time they were presented with the original information. The clues are what are called 'extra-experimental information'. As a result, the ESP was disproved, all of its followers were cast into confusion, everyone else was really happy, etc etc.

"But Adrian," I hear you cry, "this doesn't bring us any closer to the stuff you were babbling on about tongues and partial information!"

Ah, but it does (in a way). Imagine you're trying to complete a crossword, and you come across the clue, 'Lying on one's back, with the face upward.' You know the answer, it's just on the tip of your tongue, but you just can't get the word out. A friend comes along, looks at it, and says, 'Is it sleeping?' and you say, 'No, it's not sleeping'. He says, 'Does it begin with the letter T?' and you say, 'No, I know it definitely doesn't begin with the letter T.'

Hold on a second though! How on earth are you doing this? You don't know what the answer is, yet you still can somehow reject candidates for the answer? Well, I bet you'd like to know how it is that the brain can do this remarkable feat of having partial information about something and simultaneously know exactly what that something is not.

So would I - no-one has figured out the answer yet. And that, readers, is why I find experimental psychology and in particular, cognitive neuroscience, so interesting.

Incidentally, the answer to that crossword question is 'supine'.

0: good or bad? / forum / 11:59 pm GMT

Wednesday, January 30

If anything, Vanilla Sky has certainly got one of the best filmed trailers I've seen for a while. It's one of those trailers, set to perfect music, that make you want to find out more about the film. And it's also a trailer which in reality bears pretty much zero resemblance to the actual storyline of the film. Not that this is a bad thing at all, because I suspect that if the trailer showed people what the film was fundamentally about, it would both give away its entire premise and simultaneously reduce its audience quite considerably.

The film starts with our young playboy, David Aames, living the typical laid-back lifestyle and having casual sex with Julie Gianni. One day, David encounters Sofia, and immediately falls in love. Julie goes completely crazy, becomes a stalker and drives herself and David off a bridge. This is not a spoiler - I'm only telling you the first fifteen minutes of the film and you'd see just as much from the trailer.

So the question is, what the hell is the rest of the movie about, then? The trailer makes it seem as if David is being set up for Julie's murder. I can tell you that this is not the case. In fact, you probably can't imagine what the case is. The most I want to say is that it's about how David copes with his life after the car crash, and how he appears to be having problems with his sanity. It'll make you think.

I described this film to a friend of mine as, 'A bit like AI - you'll either love it or hate it. But with Vanilla Sky, other people besides myself also love it.'

I'm not prepared to spoil this movie here, although I'll probably comment on it in a thread on the forums. I'm glad I didn't read any spoilers, as that would have detracted from my enjoyment of it by a fair amount. Oh, and it's got great music. I judge a film's music quality by the number of its tracks I download the next day. The usual is one or two. This time, I downloaded five.

2: good or bad? / forum / 05:11 pm GMT

Tuesday, January 29

How to double your memory
(also: Why it's worth going to Experimental Psychology lectures)

Imageable words such as brick, cup and dog have been shown to have superior memory recall to non-imageable words such as love, silence and nostalgia. This isn't necessarily anything to do with the fact that the brain can utilise the visual cortex for memory in those cases; it's mainly because imageable words have richer semantic representations. Therefore, if you treat memory as an autoassociative network (a network of nodes that can all link to each other), those words with more connections will have improved memory recall.
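To make the autoassociative idea concrete, here's a toy Hopfield-style network in Python (using NumPy). It stores one made-up pattern with the outer-product rule and then recovers it from a partially corrupted cue; real semantic memory is obviously far richer than this:

```python
import numpy as np

# Toy autoassociative network: store a pattern of +1/-1 node states
# via the outer-product rule, then recall it from a corrupted cue.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])

W = np.outer(pattern, pattern)   # symmetric links between all nodes
np.fill_diagonal(W, 0)           # no node links to itself

cue = pattern.copy()
cue[:2] *= -1                    # corrupt two elements of the memory

recalled = np.sign(W @ cue)      # one update step completes the pattern
print(np.array_equal(recalled, pattern))  # True
```

The point is that every node is linked to every other, so a partial or damaged cue is enough to pull back the whole stored pattern - which is why richer (more connected) representations recall better.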

We can also see this in the cases of the 'peg-word' system, which is really a mnemonic system. Imagine you have to memorise a list of ten words, which might or might not be ordered, for example, cup, dog, lamp, etc. What you do is to link those words to your 'peg-words', an example list of which is below:

One is a bun
Two is a shoe
Three is a tree
Four is a door
Five is a hive...

So for cup, you'd imagine a bun in a cup. Or a bun next to a cup on a plate - whatever. For dog, you might imagine a dog wearing shoes. For a lamp, you could see a lamp with a picture of a tree on it.

Does this really work? Definitely. You get about a 40% increase in memory recall. This falls slightly when you have to do an 'interference' task, such as using a laser pointer to follow a moving dot on a wall, suggesting that this visual mnemonic system uses the visual areas of the brain, as you would expect. Mnemonic techniques such as these are best for arbitrary data though, not for meaningful information, where organisation is the key.

Visual memory is pretty great, really. Psychologists in the past have discovered that if you have interacting images, it's even better. So if you want to remember, say, a piano and a cigar, the best thing to do is to imagine a cigar lying on a piano, not just a cigar and a piano separately. These psychologists also used to think that the more bizarre the image, the better the memory retrieval would be (e.g. a piano smoking a cigar). This turned out to be untrue - bizarreness (if such a concept could be measured quantitatively) has no effect on memory recall.

Another good and well known method is that of loci. It's also known as the renaissance museum model or something. Basically, you imagine the things you have to remember as having spatial locations. So for our cup, dog and lamp, you could imagine a cup sitting at the door to your house, a dog waiting at the gate of the path leading to your house, and a lamp sitting on your car outside. For this, you can either pick a well-learned route (e.g. drive to work) along which to scatter your objects, or you can imagine the rooms of your house holding the different objects. Again, this method is really superb and can show improvements of 50% to 120%.

The other issue in memory is spacing effects; should you cram all your learning into one block, or spread it out? Well, unless you are looking for short-term retention (immediate recall) then you should aim for spaced study. An ideal timetable might look like learning some data, then recalling it two minutes after first learning it, then 30 minutes, then two to three days, then a week. After that, the data should theoretically now be in your long term memory. Hurrah! If you do this, you can nearly double your memory retention and recall.
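One reading of that timetable, sketched in Python with each review scheduled relative to the previous one (the exact intervals are my hardcoded approximation of the ones above):

```python
import datetime

# Expanding-rehearsal schedule: each review comes a longer interval
# after the previous one. Intervals approximated from the text.
INTERVALS = [
    datetime.timedelta(minutes=2),
    datetime.timedelta(minutes=30),
    datetime.timedelta(days=2),
    datetime.timedelta(days=7),
]

def review_times(first_learned: datetime.datetime) -> list[datetime.datetime]:
    """Return the review times, each relative to the previous review."""
    times, t = [], first_learned
    for gap in INTERVALS:
        t = t + gap
        times.append(t)
    return times

start = datetime.datetime(2002, 1, 29, 16, 0)
for t in review_times(start):
    print(t)
```

After the last review on this schedule, the data should theoretically be sitting in long-term memory.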

Finally, organise your data. Organising data which has to be learned significantly improves memory recall, even if you aren't doing it with the intention to recall! In fact, if you try to learn some data without organising it, you could end up doing worse than someone just organising it and not even trying to learn it. Clearly, this works best with highly structured material which lends itself to a great deal of organisation.

In conclusion: Use imageable words. When learning arbitrary data, use mnemonic techniques. When learning structured data, organise it. In all cases, use spaced practice, aka 'expanding rehearsal'.

In other news, I finished reading Sophie's World and just bought a hardback copy of the Collected Stories of Arthur C Clarke (all of his seven published collections plus all his stories from the last twenty years) for £4. I've probably read most of the decent stories in it already, but I've always had a soft spot for Clarke and it'll be good to have them all in one book.

3: good or bad? / forum / 04:18 pm GMT

Sunday, January 27

Picture the scene: It's about midnight, and we've been playing poker for about three hours. Someone walks into the room containing myself and four other hardened players, and mentions that he's going to be watching Rush Hour 2 on his computer. We nod, and have a discussion about university bandwidth charges and the fact that my college is in its rightful place of number one among bandwidth users.

Five minutes of subjective time passes, and the same person returns, having finished watching the movie. All the players fall into a stunned silence. Is it really 2am? It is. Perhaps that explains the reason why we can't keep track of who's dealing from one hand to another, let alone within a single hand...

New invented adjective: declenched, as in 'when I heard what she said, I felt a bit declenched inside.'

Reading Sophie's World, I finally found out (or at least, was reminded of) the proper definition of an agnostic, i.e., 'one who holds that the existence of anything beyond and behind material phenomena is unknown and (so far as can be judged) unknowable, and especially that a First Cause and an unseen world are subjects of which we know nothing,' (definition from Oxford English Dictionary). In terms of God, where it is usually referred to, it does not mean, 'Well, I'm not sure whether there's a God or not,' or some such wishy-washy nonsense, it actually means, 'You can't prove or disprove the existence of God.' So I think I'm an agnostic, not an atheist.

2: good or bad? / forum / 04:21 pm GMT

Powered By Greymatter