Monday, 22 April 2024

From "what is life?" to consciousness without cells: a science-inspired thought experiment

What is life? This question has been asked by many, from philosophers to molecular biologists, and is surprisingly difficult to answer. There are standard definitions that work well for day-to-day and practical scientific purposes. But once in a while, I wonder if there could be more. Whilst reading Siddhartha Mukherjee’s “The Song of the Cell”, I had a random string of thoughts that led me to some interesting ideas. I wrote them down and decided to make them into a short blog post to share this fun – and potentially philosophically relevant – argument with you.

Here’s how Mukherjee defined life: “To be living, an organism must have the capacity to reproduce, to grow, to metabolize, to adapt to stimuli, to maintain its internal milieu. Complex organisms have the… emergent properties… mechanisms to defend themselves … organs with specific functions… and even sentience and cognition.” He goes on to ponder asking an extraterrestrial being, “Do you have cells?” – concluding that “it is difficult to imagine life without cells.”

Could we imagine life without cells? I can certainly imagine it. The emergent properties that living things have, including mechanisms to defend themselves and organs, could be implemented in other media. We are already implementing incomplete parts of life in plastic and metal, in the form of synthetic organs.

Then he goes on to the standard example: "…viruses are inert, lifeless, without cells". When I first learnt that viruses weren't considered "alive", this puzzled me. I now know that this is based on one of the standard definitions, under which something alive must have the ability to maintain itself (amongst other properties) – a property viruses don't have. They need hosts, like ourselves.

What if you have an amalgamation-of-parts (AOP), like a virus, that is conscious? Let's ignore the details of what sort of system is required for consciousness, and indulge me in a thought experiment. If an AOP had the ingredients for consciousness, such that it could experience and express itself, but had to be tied to another organism, would it be considered alive? It cannot maintain itself and requires a constant external energy source. But if it were created – say, in our bodies – and gained consciousness as it developed, this could be considered a conscious-without-life being.

You may now think: that's a weird conclusion! Clearly, the AOP is alive! It just requires a host. By this argument, a virus is alive. I am sure I am not the first to argue this, but my interest in (weakly) emergent properties in biology and in mechanisms (Craver, 2007; Mok & Love, 2023; Mok, in-theory-forthcoming-opinion-paper) made me think this is particularly interesting. The combination of parts, organized in a certain way, can be a living organism, even if it is not self-sufficient.

A virus is alive, as long as it has a host like ourselves. But might it be more correct – though maybe more horrific – to say that we share life?


Inanimate consciousness

Up till now, this was an interesting thought with some – I believe – valid reasoning. I wondered if I should go on, as it gets a bit nutty and fluffy from here on, but what's the point of a blog if it's all just serious ideas?

Many have asked whether silicon-based life is possible – and, more relevant to cognitive scientists, whether silicon-based consciousness is possible. I used to think it was not possible for various reasons, one of them being the problem of creating and implementing a biological system with "life" in silicon. However, the idea above raises the possibility of a combination of biological organism and silicon, where the consciousness may not even be a single combined consciousness, but rather a separate one in the silicon, independent from the hosting life, whilst ever reliant on it. This by no means speaks to the hard problem of consciousness. The point is simply that, if we extend the definition of life to things typically considered inanimate, like viruses, we can logically extend consciousness to those same things – what one might call “inanimate consciousness”.


An alternative view – artificial consciousness without defining life

Up till this point, I had my philosophy hat on. So let me put my scientist hat back on for a moment. In my day job, I try to figure out how humans are able to do all these clever things by studying our brains. This is done by identifying some cognitive ability I find interesting (a behavior or capacity to do intelligent things) and decomposing it into parts – abstract cognitive functions or neural mechanisms – that work together to explain the cognitive ability.

Why is “life” so difficult to define, and perhaps more importantly, why don’t scientists bother to define it? Probably because it doesn’t matter – its definition is irrelevant to their research questions. As scientists, we are interested in some phenomenon at hand – whether it is the cell’s ability to divide or grow, or our ability to solve a crossword or play Go – and we want to figure out how it works. Determining whether something is alive or not doesn’t help us.

How about consciousness? You might think that defining it doesn’t matter either. But in fact, it matters a great deal to the cognitive scientist interested in consciousness, and it is relevant to our thoughts about artificial consciousness. One approach to the science of consciousness is to figure out which kinds of cognitive functions require consciousness and which don’t. For example, patients with blindsight report not being able to (consciously) see in parts of their visual field due to lesions in their visual cortex, but if you show them stimuli in their blind field and ask them to guess whether a stimulus was present, they are much more accurate than you’d expect (i.e., more accurate than if you closed your eyes and made the same judgment). Through many clever methods cognitive scientists have devised and continue to devise, it seems possible to figure out which kinds of functions require consciousness and which do not (i.e., consciousness can be defined functionally). Indeed, this is how many scientists of consciousness work on this problem.
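To make "much more accurate than you'd expect" concrete, here is a minimal Python sketch of how one might test whether forced-choice guessing exceeds chance; the trial counts are made up purely for illustration and are not from any actual blindsight study.

```python
# A minimal sketch (hypothetical counts) of testing whether forced-choice
# guesses in the blind field are more accurate than chance.
from scipy.stats import binom

n_trials = 200       # hypothetical number of "stimulus present or absent?" trials
n_correct = 128      # hypothetical number of correct guesses
chance = 0.5         # expected accuracy if the patient were only guessing

# One-sided p-value: probability of getting >= n_correct hits by chance alone
p_value = binom.sf(n_correct - 1, n_trials, chance)
print(f"accuracy = {n_correct / n_trials:.2f}, p = {p_value:.4g}")
# A tiny p-value indicates above-chance performance despite no conscious report.
```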

This means that artificial consciousness – and its scientific study – is clearly possible, in brains, bodies, and even robots, whether we consider them “dead” or “alive”.

Friday, 7 October 2022

The role of storytelling in the scientific process


“Man is the Storytelling Animal, and that in stories are his identity, his meaning, and his lifeblood…” - Salman Rushdie, in Luka and the Fire of Life


“Socrates speaks” – Louis Joseph Lebrun (1867)



We're told fascinating stories of science about how things work: how dinosaurs roamed the earth millions of years ago, how putting atoms together gives us gases like carbon dioxide or liquids like water, how genes determine certain biological traits… And they're particularly fascinating stories because they're factual – at least for me!


A common criticism that has come up in recent years is that scientific publishing involves a bit *too much* storytelling – that the research that gets the most attention and is published in the most prestigious journals may be selected not on quality but on how well the authors sell the work.


This made me wonder: is good storytelling in science a bad thing?


In this blog post, I will discuss the idea that the role of storytelling in science goes deeper: rather than being merely a device for reporting discoveries, storytelling might be deeply rooted in the scientific process itself, including theory building, creative hypothesis generation, problem solving, and the discovery itself. Rather than regulating it, we should encourage it in the scientific process, while at the same time discouraging the bad incentives and practices in science.



Good storytelling and bad incentives in academic publishing


The criticism is that scientific publishing has overemphasized the importance of storytelling. To get the attention of journal editors, the work must be novel, sound exciting to scientists, and make a broad impact on the field. To catch the attention of the popular press, it has to sound exciting to the public, and potentially have an impact on society. This means scientists have to sell their work, hard, even if the findings are not as spectacular as they claim. For those of us who know the pressures to publish, this is old news. Those who publish in big, "high-impact" journals get rewarded with prestige, grant money, and jobs. So the incentives for publishing in these venues are huge. This is not my focus, but it's an important issue to raise as this is typically what comes to mind when people talk about storytelling in science. It relates to the so-called 'replication crisis' in various empirical sciences, as people cherry pick results so they can publish, even when the data don’t support their conclusions.


Among the discussions that were happening (including on Twitter), I saw academics, often senior and high-profile professors, who spoke out with a different view: that good writing is important for science, and that a good narrative is key to a good article or presentation. In one sense this is obvious – if the writing is bad, with ideas that are incomprehensible and that no one understands, that's not great for science. It's only when it goes overboard (which it does) that this is a problem. And the field is working on this.

 

Whilst pondering this issue, I wondered: why is there such an emphasis on storytelling in science? And are there positives to the fact that we’re often so obsessed with a good story?



Storytelling in science: more than just a good story?


I wondered if there might be something deeper to why we are obsessed with the idea of storytelling and a good narrative. It led me to think: could storytelling also be a crucial part of the scientific process? Almost all the time, we try to put the pieces together and come up with theories of how things 'work', after which we turn to our instruments to test these theories. Rather than serving exclusively as a reporting device, storytelling also seems to be a key part of the scientific process. Maybe story generation is part of the process of producing explanations that allow us to consider the many different ways things might work.


So, where does storytelling happen in the scientific process?


To take an example from my own field, we might see an article claim something like: we found that brain region X is active when people successfully navigate a maze to a goal (e.g., playing a game during a functional MRI scan). But how? What are the psychological and brain mechanisms that support navigation? The raw data (MRI images) can’t tell you anything by themselves. Before thinking about how the data are analyzed, we must step back and consider more basic questions: how is navigation possible at all? We probably need some cognitive ingredients: short-term memory, self-localization, long-term memory, a mental map… and there must be a precise way these things are put together. Then you might search for these things in the brain and see whether they come together to create the behaviour. Maybe you’ll find short-term memory representations of the goal location in some brain region. Maybe there’s a hint of a ‘map’ being formed during learning, and the brain uses that map to navigate to the goal. Maybe various brain regions responsible for short-term memory and goal representations work together.

 

Even from this simple example, you can see that you need to work to put things together creatively, to make sense of it all. In a compelling report, the authors often tell you a story – the big picture and all the little stories of how the pieces fit together. We need a theory – a coherent story of how it's even possible to solve this (navigation) problem. We might even formalize it in a computational model (a set of equations that implements the underlying processes: e.g., a short-term memory store and a map-construction process, which are combined to output directions). Then we might try to find the pieces (cognitive constructs, or model processes/variables) in the data. Some argue that we should "just look at the data". Though data can provide hints as to what the pieces are, and can help modify our theory, we still need to generate the story (or theory) to construct a satisfying explanation. When we do science, we want a satisfying explanation. I’d go as far as to say we want a convincing story of how the phenomenon at hand arises.
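As a toy illustration of what such a formalization could look like (a sketch of my own, not any specific published model), here is a minimal Python example with a map-construction process, a short-term memory store for the goal, and a planner that combines them to output a route:

```python
# Toy navigation model: a learned map + a short-term goal memory -> directions.
# A minimal sketch for illustration only; not any published model.
from collections import deque

class ToyNavigator:
    def __init__(self):
        self.cognitive_map = {}   # learned adjacency: location -> set of neighbours
        self.goal_memory = None   # short-term memory store for the goal location

    def observe_transition(self, loc, new_loc):
        """Map construction: record that two locations are connected."""
        self.cognitive_map.setdefault(loc, set()).add(new_loc)
        self.cognitive_map.setdefault(new_loc, set()).add(loc)

    def remember_goal(self, goal):
        """Store the goal location in short-term memory."""
        self.goal_memory = goal

    def plan(self, start):
        """Combine map and remembered goal: breadth-first search for a route."""
        frontier = deque([(start, [start])])
        visited = {start}
        while frontier:
            loc, path = frontier.popleft()
            if loc == self.goal_memory:
                return path
            for nxt in self.cognitive_map.get(loc, ()):
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, path + [nxt]))
        return None  # no route known yet

# Usage: explore a tiny corridor, remember the goal, then navigate to it.
agent = ToyNavigator()
for a, b in [((0, 0), (1, 0)), ((1, 0), (2, 0)), ((2, 0), (2, 1))]:
    agent.observe_transition(a, b)
agent.remember_goal((2, 1))
print(agent.plan((0, 0)))  # [(0, 0), (1, 0), (2, 0), (2, 1)]
```

Changing how the pieces are put together (e.g., planning over a noisy or incomplete map, or letting the goal memory decay) would correspond to telling a different story about how the behaviour arises.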

 

Whilst writing this I realized a lot of it is about theory building, which is a crucial part of science (e.g., van Rooij & Baggio (2021) and Guest & Martin (2021) – who I have been highly influenced by). But here I’m emphasizing the creative, story-generating, putting-the-puzzle-together side of the picture, which extends beyond theory building to data analysis and the writing process. As people say, it's difficult (impossible?) to teach creativity, and many will agree that scientific discovery requires creativity. But what does that mean? I think it might be about how well we can generate theories: how good we are at telling ourselves a story, changing the plotlines, considering the ways in which the pieces are put together, adding new pieces, and testing them. And formalization can help, as you can build a model that implements your theory and test what happens when you change the way the pieces are put together.

 

The key is not to get too stuck in your own theories and ignore *all* data. Ignoring data can sometimes be valid (like when the data aren't reliable or you can't be sure; one example is Crick and Watson working with poor images of DNA – to paraphrase from memory: sometimes data get in the way of theory). But sometimes it's not – and it's up to us to figure out which is which. Good methodology and statistics help, but in the end it's up to us. Science is not simply about data analysis; thinking, theorizing, and creative mental storytelling (possibly with a good dose of daydreaming) are needed for good science to be possible!



Final thoughts: Destined for the garden of forking paths?

 

Most likely, our desire for a good story extends way beyond scientific practice. As the quote at the top suggests, storytelling is in our nature – it's how our minds work! We always try to find an explanation for everything, even if we sometimes get it wrong. We like stories. They make things easier to understand, easier to remember, and much more interesting. That's why good books and films must have a good narrative that both engages the audience and helps us understand what’s going on.


When we ask questions during the scientific process, we also strive for satisfying explanations. And often that means there should be a story to tell, and one that is engaging. In cognitive psychology and neuroscience, we create theories – built from mental constructs, cognitive capacities, algorithms, and neural mechanisms – aimed at providing good explanations for our behaviour: multi-part stories that help us comprehend the issue at hand. When done well, theories let us predict things and make causal statements. And the actual building of the explanation is key.


Thinking about the process and the creative aspect of theory building, where we try to construct a story, a narrative of how things happen, really makes me think that it is a bit like creating a universe in our heads, like how writers imagine their fictional worlds. And we can consider many, many hypothetical universes. So let us embrace our storytelling animal in the scientific process! Let us dream a bit, consider multiple forking paths, and once in a while, it might lead us to a genuine, exciting discovery.


Good science requires good storytelling. We should be firm on good methodology and statistical inference, but the story is equally important. It must flow, make sense, and be a satisfying explanation – from the process of theory generation, to understanding the data through hypothesis testing and data exploration, to the final version of the published article. As with any good story, it should be engaging and help us understand the world that the writer is trying to portray. Except that instead of a fictional world, it’s the fascinating world we live in.

Friday, 16 September 2022

Cognitive Computational Neuroscience 2022: Thoughts and Hopes for the Future

CCN San Francisco: my first international conference in 2.5 years. After a hectic summer (and year!), getting covid for the first time a few weeks before the conference, I was tempted to skip it. How glad I am that I didn't!

 

The conference exhibited an impressive range of topics, but somehow still retained a sense of coherence. Topics ranged from experimental work in perception to decision making, with data types ranging from behavior to neuroimaging and neurophysiology, in species from rodents to humans, and much of it accompanied by some modelling approach. The models themselves ranged from cognitive models to varieties of deep and recurrent neural networks. It didn't feel like there was a bias toward or away from any model organism (species) or computational modelling approach – as long as you’re working in the cognitive or computational mind or brain sciences, you're welcome here!

 

It felt cutting edge, relevant, and at the very least, full of interesting work for cognitive neuroscientists with a computational bent or ML folks with a cognitive/neuroscience bent. As cognitive or neuroscience conferences often have a strong empirical bias, and computational cognitive or neuroscience conferences sometimes focus more on certain types of models or overemphasize animal work, CCN fills an important gap in the cognition-computational-neuroscience conference scene.

 

Brief personal highlights

 

Some highlights of mine, which also illustrate the breadth of the conference:

 

In Chelsea Finn's talk, I learnt that getting actual physical robots (with a deep neural network; I believe deep reinforcement learning) to learn tasks and generalize in real life is *much harder* than in a simulated environment (e.g., Go, ATARI games). Tasks included training to pick up an object and drop it in a designated area. Testing on the same object doesn't give 100% accuracy, and when robots were tested on novel objects with different colors/shapes and in new environments, they failed a lot! I think overall performance was 10-30% if I remember right (lower than you'd expect, anyway). Very interesting to know how far we are from human-level AI in the real world.

 

Talks on "Drivers of learning across timescales: evolutionary, developmental, & computational perspectives" - the evolutionary perspective was different to our standard cognitive / computational / neuroscience talks. Interesting and intriguing.

 

I joined the tutorial "Varieties of Human-Like AI" – a very good tutorial with code covering the basics of RL, from Q-learning to the successor representation to model-based RL in simulated environments like grid worlds. It was a bit fast, but that's not unusual. Some nice suggestions were to have longer tutorials or hackathon-style days where people could hang out to chat or code whenever they wanted.
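For readers who haven't seen the starting point of such tutorials, here is a minimal, self-contained sketch of tabular Q-learning in a tiny one-dimensional grid world (my own toy example, not the tutorial's code):

```python
# Minimal tabular Q-learning in a 1-D grid world (toy example, not the tutorial code).
import random

N_STATES = 5            # states 0..4; reaching state 4 gives a reward of 1
ACTIONS = [-1, +1]      # step left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy action selection
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s_next

# After learning, the greedy action in every non-terminal state should be +1 (move right).
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
```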

 

Innovations

 

I had only been to CCN once before – the first one in New York City in 2017. It was very nice, but it was mostly a standard conference. Over the last few years, I heard about the new innovations at CCN (and Neuromatch), but had not been able to attend. As I am typically a passive participant at conferences, and only participate if I have to, I joined with some skepticism about these ideas. But I can say that the 'innovative' bits were generally good and purposeful – and they ended up being rather fun. I also think it took a dedicated group to make it so; if it were just a large old conference trying to do a few new things, it would probably end up gimmicky.

 

Mind matching - their algorithm (by Neuromatch, and of course they’re now using a large language deep neural network model!) matches people with similar interests. We provided several abstracts, and the model processes the text and matches us with other attendees. Three to four people join a table, get ~25 minutes to chat, then go on to the next table – three tables each. From what I heard, sometimes the matching was a bit random, other times “too good" (with people you already knew – there’s an option not to match with specific people, though you have to know in advance that they’re attending…), but all in all it worked pretty well. I met a few interesting people, and there were 3 (out of 11) whom I might not have met otherwise, and we chatted at the conference and over drinks (there were others, but I would've met them through friends).
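I don't know the details of the actual matching algorithm, but the general idea can be sketched as text-similarity matching. Here is an illustrative toy version – purely my own assumption of how such a system might work, not Neuromatch's code – using TF-IDF vectors and cosine similarity (scikit-learn) with a greedy grouping into tables:

```python
# Illustrative sketch of abstract-based matching (not the actual Neuromatch algorithm).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = {  # hypothetical attendees and abstract snippets
    "alice": "hippocampal place cells and spatial navigation in humans",
    "bob": "deep reinforcement learning agents navigating grid worlds",
    "carol": "attention and working memory capacity in visual cortex",
    "dave": "cognitive maps, grid cells and model-based planning",
}

names = list(abstracts)
vectors = TfidfVectorizer().fit_transform(abstracts.values())
similarity = cosine_similarity(vectors)

# Greedily seat people at tables of up to 3 with their most similar unmatched peers.
TABLE_SIZE = 3
unmatched, tables = set(names), []
while unmatched:
    seed = unmatched.pop()
    i = names.index(seed)
    peers = sorted(unmatched, key=lambda n: -similarity[i, names.index(n)])
    table = [seed] + peers[:TABLE_SIZE - 1]
    unmatched -= set(table)
    tables.append(table)

print(tables)
```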

 

Generative Adversarial Collaborations - I won't say too much about these, as there's an explanation of them here: https://gac.ccneuro.org/call-for-proposals. What I did find interesting was that in one of them, there was time to discuss the posed questions in small groups (everyone in the audience was assigned a group). As the questions were stimulating and interesting, our group had some fruitful conversations about the definitions of mental simulation: does it have to be sequential; we probably do simulate, but it depends on time/energy constraints; and we do simulate, but can be very bad at it! It would've been even better if there was more debate on stage, but no one can control this (and quite a few Zoom panel participants made it more difficult).

 

Reviewing submissions - this was an interesting process: 2-page submissions with a scoring system to select talks. Reviewers had to comment on the submissions (I had 7), assessing their potential impact and clarity, and produce ratings, which were normalized and aggregated for talk selection. Most interestingly for me, we got feedback from reviewers ranging from out-of-field to expert (within the CCN community). Normally we only get reviews from experts of some sort on our topic. It was interesting to hear comments from others in the more ‘general’ computational fields, and to learn which aspects of the work were unclear or interesting to non-experts. This is feedback we don't normally get.
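I don't know the exact normalization and aggregation procedure CCN used, but a common approach is to z-score each reviewer's ratings before averaging across reviewers; here is a minimal sketch of that idea with made-up ratings:

```python
# Sketch of reviewer-wise normalization and aggregation (made-up ratings;
# not CCN's exact procedure).
import numpy as np

# ratings[reviewer] = {submission_id: raw score}
ratings = {
    "rev_a": {"sub1": 4, "sub2": 2, "sub3": 5},
    "rev_b": {"sub1": 3, "sub3": 3, "sub4": 5},
    "rev_c": {"sub2": 5, "sub4": 4},
}

# z-score within each reviewer to remove differences in how harsh/generous they are
normalized = {}
for reviewer, scores in ratings.items():
    vals = np.array(list(scores.values()), dtype=float)
    mu, sd = vals.mean(), vals.std() or 1.0   # guard against zero spread
    for sub, score in scores.items():
        normalized.setdefault(sub, []).append((score - mu) / sd)

# aggregate per submission and rank (e.g., top-ranked submissions get talk slots)
aggregate = {sub: float(np.mean(zs)) for sub, zs in normalized.items()}
for sub, score in sorted(aggregate.items(), key=lambda kv: -kv[1]):
    print(sub, round(score, 2))
```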

 

I suppose one criticism of using this system to select talks is that the most highly rated submissions are likely those whose authors could convince researchers from a large range of computational approaches, both in and out of their field, that the work is important. This probably leads to a bias away from interesting approaches that are a bit more niche or different or, dare I say, groundbreaking, in comparison to more established and well-known approaches (and certainly labs). To be fair, this is also often true of other reviewing systems. That said, it's a very cool system, and what are innovative ideas for if not to keep having and improving them? Huge credit to CCN and the team who did this (see https://meadows-research.com).

 

A half-baked idea: [walking tour-inspired] multiple poster presentation starting times

 

With all the innovation, I couldn't help but notice one aspect that was traditional and suffered from old problems - the posters! I am sure others have felt this before: the presenter presents to one person, then others join halfway through. Sometimes people can get the gist and follow, but often they can't. Some decide to come back later, but often return to find the presentation mid-way again! Plus, as the presenter, you often want to see other posters in your session, as these are probably the most relevant ones!

 

The idea is inspired by how walking tours work – with multiple tour start times, each guiding a separate group. People would arrive just before a start time, so it's possible to wait for a few people to gather before the presenter starts. Note that this is not the same as a time slot for when the presenter is around, but rather the presentation *start* times. Of course, it should be flexible, e.g., a presentation might take longer than you think. Ideally, there’s a conference-wide system where start times are staggered across posters (maybe randomization is fine, with future plans for an algorithm!). I acknowledge that sometimes the 1-on-1s are great, but these can still happen, such as in between designated “tour” start times, or after the session.
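As a toy illustration of the conference-wide version of this idea (randomized, staggered start times), a scheduler might look something like the sketch below; the session length, tour duration, and offsets are made-up parameters:

```python
# Toy scheduler for staggered poster "tour" start times (illustrative parameters only).
import random

def stagger_start_times(poster_ids, session_minutes=120, tour_minutes=20, offset_step=5):
    """Assign each poster repeated start times, offset so neighbours don't all start together."""
    offsets = list(range(0, tour_minutes, offset_step))  # e.g., 0, 5, 10, 15 min offsets
    random.shuffle(offsets)
    schedule = {}
    for i, poster in enumerate(poster_ids):
        offset = offsets[i % len(offsets)]
        schedule[poster] = list(range(offset, session_minutes, tour_minutes))
    return schedule

# Example: four posters in a 2-hour session with 20-minute tours.
for poster, starts in stagger_start_times(["P1", "P2", "P3", "P4"]).items():
    print(poster, [f"+{m} min" for m in starts])
```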

 

This would make it more likely for the presenter to present to multiple people at once, rather than give multiple 1-on-1 presentations, which can be exhausting. It also lets the presenter take a break and see other relevant posters in their session (which everyone wants!).

 

Issues

 

Some minor issues were raised which many people agreed with. For example, there needed to be more social time, maybe a social night - this could be organized by a group of students and maybe postdocs, who would know what's fun. Maybe lunch could be provided each day so people would stick around, or at least be organized somewhere.

 

There were probably too many zoom speakers and discussion panelists. Maybe it's a post/peri-covid phenomenon, and hopefully it'll get better. We should be flexible for speakers, and it makes sense when people really can't come (e.g., family, unexpected issues) but it felt like there might’ve been a few too many this time.

 

Future of CCN

 

Finally, I look forward to the future of the conference. It started with great promise, and has since been grown and maintained by a group of passionate and dedicated young leaders. As many have heard, CCN got into a bit of financial trouble: as a young conference, it apparently suffered financially when covid suddenly stopped everything and the 2020 conference had to be cancelled. What's clear is that the organizers are now doing their thing, planning to make the next CCN work, and many more in the future.

 

My (probably naïve) ideas on keeping the conference going with healthy finances in the near future (inspired by others at the conference): there could be a limited number of places where the conference is based (for now, at least) – for example, a particular venue or hotel (getting a deal over a few years) or a university with a big center. Another is pairing with other conferences – people said that pairing with Bernstein in Berlin worked well, boosting interest in both events.

 

All in all, it was a great conference, and I truly hope it will continue for many years to come. Major thanks to the organisers and the sponsors! Hopefully current and potential sponsors can see this for what it is – a truly special conference with a bright future. To paraphrase the saying: if you build it and keep it going, we will come!

 



(Picture credit to Laurence Hunt at his great closing speech)






Saturday, 26 March 2022

Reflections on cognitive neuroscience as a young science: is our field at a critical period of development, struggling to mature? (what happened, and thoughts after my talk at the Trinity College Science Symposium)

I recently had the opportunity to present at the Trinity College Science Symposium. This is the annual symposium of an undergraduate student-run society (the Trinity College Science Society), which was to be filled with speakers from all levels of the college - from undergraduates to professors. As I looked forward to the event, I anticipated a wide range of students, postdocs, and fellows (yes, there's a difference in Cambridge ;) ) in the audience, and even a few professors across several fields. It is a general science society, after all. Not thinking too much about it, I planned to give a standard talk I’ve been giving recently, maybe with a slightly longer introduction.

When the programme arrived, I had a glance at the talk titles and thought it was pretty heavy on the physical sciences. I realised that made sense, as Trinity has a strong history in physical sciences – it's where Isaac Newton, James Clerk Maxwell, Niels Bohr, and many others were. After the first few talks, my suspicions were confirmed. Not only were many of the talks in the physical sciences, they were technical. The students seemed to follow the talks, or at least were engaged. The content of the talks was great. It also struck me how much foundational knowledge could be taken as a given in these fields (at least within a particular framework). The precise models and predictions reminded me of how mature some of these sciences were, and in contrast how young we (cognitive neuroscience) are as a field. And it made me wonder how much more we need to do to mature as a science.

 

Figure 1. Isaac Newton Statue in Trinity College Cambridge Chapel. See http://trinitycollegechapel.com/about/memorials/statues/

 

I was about to give my talk. I realised that this was an entirely different group of people to those I was used to talking to – they were not psychologists, neuroscientists, or computer scientists, and they were mostly undergraduates. If I gave my standard talk, I probably wouldn't be able to explain the work properly or convey why it's interesting, and if I went into detail it would've been too field-specific. Either way, it'd be boring to this audience.


I decided to do something I've never done before in such a short time frame: change my talk. (Some context: I am not a naturally good speaker, and have hitherto planned all my talks, so this could’ve been a very bad idea).


I thought: what should I say to a group of physical and natural science undergraduates (I checked at the beginning of my talk) and researchers? Their fields are mature. Cognitive neuroscience, on the other hand, is a relatively young field. However, we are in a very exciting time where a lot of work is being done. So, I thought I'd talk about how young our field is compared to the physical sciences, and therefore also how exciting it is! We are trying to understand our own minds, and we are only at the beginning. Such a talk could pique the interest of a physical or natural sciences student who had never thought of pursuing this kind of research. In the worst-case scenario, it’d be entirely different to all the other talks of the day so far, which might be a nice change.


I first checked: “A show of hands: students in the Physical sciences?”. Probably half the audience. “Natural sciences?” – the other half. Suspicions confirmed, I decided to change the talk as planned.


I kept the core content of my talk as an example of the research our field does, but started with a long introduction. How do we study the mind and brain? Through behaviour, modelling, neuroimaging, and (single-cell) neurophysiology. But how young our field is compared to the physical sciences! I mentioned how functional MRI studying cognition (i.e., with people doing tasks in the scanner) only started in the 90s. How these neuroimaging techniques are the only ways to peek into the live, working brain without cracking the head open. That we are still in the process of figuring out how best to study and understand cognition and the brain. We first found blobs on the brain, looking for brain areas that respond to mental constructs, even poorly defined ones. Or we were excited about things like the 'pleasure' chemical (dopamine is much more than that - e.g., see https://www.theverge.com/2018/3/27/17169446/dopamine-pleasure-chemical-neuroscience-reward-motivation).


But we're now pushing for high-quality, thoughtfully designed behavioural experiments in neuroscience, as well as building models to figure out which brain regions implement particular computational processes. We're getting more high-quality data and even *big* data (human/animal behavioural data, neuroimaging datasets, multi-neuron recording arrays in animals, etc.), and there are strong efforts to build theories and computational models to mimic and find good explanations for complex cognition and the brain’s activities. The field may be starting to mature. Many of us are starting to use modelling approaches to explain the mind and brain – from cognitive models to spiking neural networks and deep convolutional neural networks (taking inspiration from, and advantage of, the tools from rapid developments in computer science, machine learning, and artificial intelligence). We're also getting more and more physicists, engineers, and computer scientists joining the field (possibly some of the students would be interested in doing this!). I talked about my work on cognitive models for concept learning and how it relates to the neural representation of space and navigation (place and grid cells) in the hippocampal formation (Mok & Love, 2019) – an example of how we can link computational models to different kinds of behavioural data and cognitive processes, as well as to brain representations from fMRI to single-cell neurophysiology data. I hope I conveyed how exciting the field is, and that it is exciting right now. Whether or not it was interesting for everyone, I couldn't say. But there were a good few attentive faces, and I got a few more (very nice) questions than expected at the end.


The brain is a magnificent organ, and the work surrounding how to understand it – cognitive psychology, [computational] cognitive science, artificial intelligence, neuroscience - is developing fast. These fields are relatively young and there is so much more to learn and do. For one, we need better theories, and better ways to test our theories. We are getting better. But we're also in a dangerous phase, where people throw in some equations in a talk to seem sophisticated or to get a paper accepted. As noted, we're not quite there yet.


These ponderings led me to recall stories from some popular science books I've read: they talked about critical periods when fields changed the way they approached their questions (e.g., biology: from taxonomy to mechanistic theories), or major discoveries that led to different perspectives and methodologies (e.g., the genetic code and the development of techniques to decipher and manipulate it), and the lives and emotions of the people working in these young, emerging fields. How must they have felt? We might not be as close as they were, but I can’t help but wonder if we're in a similar transitional phase, and that history is being made right now. There are some tell-tale signs: we don't know exactly what we're doing or how best to approach our questions; many of us are blindly grasping in the dark trying to figure out what's out there; but there are informative signals here and there that give us hints as to what the elephant is like (https://en.wikipedia.org/wiki/Blind_men_and_an_elephant), or at least point us toward what a better understanding might look like. Our field might be at the developmental stage of a confused toddler, or perhaps a rebellious teenager, struggling to mature during a critical period of development. Maybe it'll take another decade or so. But if there will be a "critical" period for rapid and fruitful development of our field, I can't help but wonder: could it be now?

Figure 2. Blind men and an elephant. See https://en.wikipedia.org/wiki/Blind_men_and_an_elephant.

Thursday, 13 January 2022

Some arguments for basic science: a personal journey

 

In graduate school, I worried about the value of my research and the value of basic science in general – was there any value in knowledge that doesn’t translate to practical applications? At times, I questioned if I chose the right path. On occasion, I went through, in my mind, the reasons why basic science is worth doing, trying to convince myself I made the right decision. The main arguments I came up with were: (1) the findings might turn out to be important for something beneficial to society at some point in the future, and (2) the pursuit and contribution to the pool of human knowledge is a good in and of itself. (1) is a key practical reason we should do and fund basic science, and there are many fantastic examples of basic science that have led to (e.g., medical and technological) applications that have benefitted human society. However, I couldn’t shake the nagging thought that most basic science – including my own – does not lead to (1). From talking to other academics, many seem to think (2) is a good enough reason. This is perhaps not too surprising, as many of us chose to pursue a career in research due to our personal interest in the topic, and we find joy in the acquisition of knowledge for knowledge’s sake. But I wondered, how much time and money is each piece of knowledge worth? And that’s if we do find anything at all (as some of you will know, many experiments don’t work out). Furthermore, I don’t think it’s immediately clear why knowledge is important in and of itself, nor is it obvious that everyone, especially those outside of academic circles, would agree. Over the last few years, I have arrived at a few reasons for doing basic science, independent of the potential applications of the acquired knowledge, which I will lay out here. A large part of this will be an extension of (2), where I argue that it is not only a good in itself but has a real positive impact on society. I hope you will enjoy this personal journey, from my doubts in graduate school to a more positive outlook on the value of basic science.

 

I started out in graduate school bright-eyed and full of excitement, gearing up to go. Around the time I started (2012), the field of human cognitive neuroscience was getting quite a lot of attention. "The reward centres of the brain!", "Mind reading with a brain scanner" – were just some of the headlines present throughout the media, in science sections of major newspapers, pop-science magazine articles and even books. As I started my PhD, I learnt of the realities and limitations of brain imaging and experiments with human participants, and how the media might’ve exaggerated things a little bit. Still, doing research looking into people's brains as they were doing interesting tasks? I couldn't be happier.

 

As is typical in graduate school, there were ups and downs. Quite often, my mind was filled with self-doubt, worrying about the limits of my knowledge, … and so on. Out of these concerns, one kept coming back: "Does any of this matter?" If I produce a finding and publish it in a scientific journal, what good have I done? In the best-case scenario, some people in my field will see it, and they might even find it interesting. It might even make it into a popular science article. But working in basic science typically means that there is no immediate application of our research findings. There are success stories of basic science, where attempts to solve interesting problems based on intellectual curiosity led to applications far and wide. It's easy to imagine how basic research in physics, biology, and chemistry has potential applications in medicine and industry. In reality, many topics of research have little potential to lead to any practical use. If you're doing a PhD in basic science, chances are that this is the case. Using research funds (and often tax-payer money) made me feel guilty sometimes – shouldn't we be using them on something "more important"?

 

My PhD was on the cognitive and brain mechanisms of working memory (or short-term memory) and attention in young adults and in normal ageing. It was pure basic research with no immediate applications. I struggled with this a bit. Thanks to my supportive lab, I found a lot of joy in my work and training. I published a paper during my PhD, and it started getting cited by other researchers, who noted how it related to their own work and even built on it. In the meantime, I was writing my thesis, and had to read up on relevant literature. For example, I wanted to know which brain regions were more prone to degradation in ageing. My data were inconclusive. There were research groups that did magnificent work in post-mortem human brains, showing regions that decreased in cortical thickness and cell count (prefrontal cortex and hippocampus). How did ageing affect spatial attention? Again, there were a handful of studies on this topic. Many of these were simple experiments with a narrow focus, and typically published in lesser-known journals. Despite this, I was so happy that there was a graduate student or researcher somewhere who did the work, as they helped me to string pieces of knowledge together for a better understanding. Without these, my studies by themselves would’ve been woefully inadequate for any argument I wanted to make in my thesis. Each paper was a small piece of the puzzle, but together, they helped me build a broader picture and understanding. Then I realised – my paper was like this too: another student might read it, and it might help them with their thesis, even if just a little bit. At some point, it might help someone write a textbook chapter, or help the clinical researcher who needs to know about normal ageing. Although my work by itself is small and insignificant, in the short term it can help other researchers, and in the longer term it can help expand our field's understanding. This was enough to inspire me a bit.

 

However, this "benefit" seems a bit too specific to academia and of less obvious value to society. So I asked myself: What value would I have seen in basic science before I became a scientist? Thinking back now, I can see that the results of years of scientific research were all around me. As a child, I loved dinosaurs. Ignorant of all the field work, technologies (e.g., carbon dating, radiometric dating), and the many lines of geological and paleontological research that took place for us to learn of these magnificent creatures, I simply enjoyed them as part of my childhood. As a teenager, I started to learn interesting things about the world and the unintuitive ways scientists inferred these facts. How to tell the age of a tree, how heat is generated by the movement of molecules, and how cells, DNA, and different biological mechanisms work. I didn't realise how much thought and work had gone into these fields before they became 'established facts'. Recently, I’ve been reading popular science books in fields different to my own, which gave me some insight into this. It surprised me how much of our "common sense" scientific knowledge of the world has only been demonstrated relatively recently (compared to how long humans have been around). For example, experimental demonstration of germ theory was provided by Pasteur (1860s), with conclusive evidence in the late 1800s (Koch). That electricity was the flow of individual particles of the same charge (electrons) was only demonstrated in the early 1900s (Millikan). And that the central unit of the nervous system is the neuron was proposed in the late 1800s by Cajal (the neuron doctrine) and only demonstrated in the 1950s (with the electron microscope). It is immensely difficult for me to imagine a time when people were seriously arguing about whether neurons have an important role in the brain’s processing, as much of neuroscience today studies the activities of these cells. The same applies to the ideas of the germ and the electron. Many of these findings now have major applications and benefits to mankind. But these scientists did not know this, nor did I need to know this to be inspired by these findings (and the stories behind them – I encourage you to check them out!). Furthermore, these discoveries would not have been possible without (what I thought were) the seemingly insignificant, individual research papers by the graduate student or early-career researcher – the literature that supported the scientists who eventually made these major discoveries. It is said that the greats only became great by “standing on the shoulders of giants” – referring to other greats. However, I would say that the greats were also supported by the shoulders of the metaphorical ‘giants’: the community of scientists who have contributed small but solid findings to the literature – without which no one could’ve made a significant discovery. Many of us will have learnt these facts in school, from popular science books, films, or even TV shows. For a long time I asked, what is the value of basic science to society? Perhaps a better question is: what role does basic science play in society? I realised that if we start noticing, it’s everywhere – it’s deeply embedded in our modern culture, from education to the media to the arts.

 

So is the pursuit of basic science and contribution to human knowledge in itself valuable and beneficial to human society? Is it a meaningful endeavour for the individual scientist, and for the society (and funders) that support it? Based on i) what makes life meaningful, and ii) the prevalence of basic science in modern-day culture, I will argue that it is. For most of us, there are people and activities that make life worth living. A lot of us love music and the arts. I used to think that artists are the ones who provide joy and meaning to humankind. However, I am starting to think that basic science's role in society may not be so different from that of the arts. We have best-selling popular science books, documentaries, and extremely popular science museums and exhibitions all over the world. The most interesting scientific discoveries can inspire anyone, anywhere. What's the difference between a mind-bending science exhibition that inspires and makes the mind buzz with ideas and excitement, and an art exhibition that does the same? As I learn more about the world, I find that (scientific) facts are often stranger – and more fascinating – than fiction. Atoms as building blocks of matter, single cells as the building blocks of life and of making any of us – clumps of flesh with conscious and (somewhat) intelligent thought – possible. And we are the ones that came up with the sciences and the arts that bring us closer to understanding ourselves and the world we live in. Witnessing the ever-increasing number of science-related exhibitions and even science-inspired art, I have come to think that I’m not alone in thinking this.

 

So to fellow basic scientists: it might be alright to be a little less hard on ourselves. Even without considering the immediate applications of our science, we are producing a body of work that will be useful for our colleagues, and will help build a deeper understanding of our fields of study. Furthermore, we are building a body of knowledge for humankind that will inspire new generations, allowing everyone – not only scientists – to understand the world we live in a little more. For this knowledge to be shared with everyone, we must do a good job of science communication across media types and across different parts of society (e.g., schools, press, books, etc.). In a sense, it’s amazing that we live in a society with institutions and funding bodies that support us in doing such research and encourage its dissemination to the public. To sum up, I’ve finally convinced myself that the pursuit of knowledge through basic science is valuable to society, and is something worth doing for myself. Socrates said, "The unexamined life is not worth living". Though I wouldn't go quite that far, I would say that a life dedicated to the study of ourselves and the world around us is certainly a life worth living.