e*

Toward a view of mind that is embodied, embedded, experiential, evolutionary, externalist...: e*. Lectures and writings by Ron Chrisley, on topics largely in the philosophy of cognitive science, in a variety of media (audio, video, PowerPoint, pdfs, txt). Featuring PodSlide technology that allows you to view lecture slides at the same time as hearing the lecture, automatically, in-sync, on any device that plays video files - even your iPhone or iPod!

Latest release on Nov 14, 2013

The Best Episodes Ranked Using User Listens

Rank #1: "After Philosophy": Introduction (part 1)

The first e* post of the new academic year is a first in another sense. Previously, all my postings here have been research lectures, about my own work. This post is of a lecture I gave on October 17th, 2006 as part of a Theoretical Philosophy course on the pioneering Consciousness Studies Program at the University of Skövde, Sweden. That is, it is a teaching lecture (that I have been giving for a few years), aimed at third-year undergraduate students on a course primarily on Modern European (read "Continental") Philosophy. As such, it is not primarily my own work. However, given my rather skewed and limited knowledge of this area, proper scholars of this kind of philosophy will probably see more of me in this lecture than they see of the work of Derrida, Foucault, Gadamer, Habermas, Ricoeur, etc.


The lecture is almost entirely based on the Introduction chapter of After Philosophy: End or Transformation?, edited by Kenneth Baynes, James Bohman, and Thomas McCarthy, and so they deserve credit for most of the ideas presented. My contributions consist primarily in giving examples and in developing an extended, perhaps laboured, Bernstein-influenced musicological metaphor that can be summarized in the slogan: "Kant is the Mahler of Philosophy".


This lecture makes poor use of the PodSlide format, going through only 6 slides in 40 minutes. It is actually only the first part of the lecture; part two, which is shorter, will be posted soon.



Nov 13 2013


Rank #2: Finding aesthetic pleasure on the subjective edge of chaos: A proposal for robotic creativity

This is a lecture I gave at Goldsmiths College in London on May 16th 2006 as part of a Workshop on Computational Models of Creativity in the Arts.

In the talk, I give the nine axioms that constitute my approach to creating systems that exhibit creativity. The axioms are:


  • Axiom 1: If you make your robot pleasure-seeking, and make creativity pleasurable, you'll make your robot creative
  • Axiom 2: To be a good creator, it helps to be an appreciator
  • Axiom 3: Let the robot experience output in the real world, as we do
  • Axiom 4: We won’t like what it likes unless it likes what we like
  • Axiom 5: An important motivator is the approval or attention of others
  • Axiom 6: Sometimes it is better not to try to pursue novelty directly, but rather something that is correlated with it
  • Axiom 7: Let dynamics play a role in appreciation
  • Axiom 8: Patterns in one's own states can be the objects of appreciation
  • Axiom 9: The best way to make outputs in the real world is to be embodied in the real world
But you'll have to listen to the talk if you want to know what all that means!
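
As a purely illustrative sketch of my own (not code from the talk), the first and sixth axioms might be caricatured in a few lines of Python: a pleasure-seeking agent whose pleasure signal rewards a cheap proxy correlated with novelty (here, distance from the running average of its past outputs), so that seeking pleasure and producing novel output coincide by construction. All names and numbers are hypothetical.

    import random

    class PleasureSeekingAgent:
        """Toy illustration of Axioms 1 and 6: pleasure rewards a novelty proxy."""

        def __init__(self):
            self.history = []  # outputs produced so far

        def produce(self):
            # Generate a candidate output (here, just a real number).
            return random.gauss(0.0, 1.0)

        def pleasure(self, output):
            # Proxy correlated with novelty: distance from the mean of past outputs.
            if not self.history:
                return 0.0
            mean = sum(self.history) / len(self.history)
            return abs(output - mean)

        def step(self, n_candidates=5):
            # Pleasure-seeking: keep whichever candidate feels best.
            candidates = [self.produce() for _ in range(n_candidates)]
            best = max(candidates, key=self.pleasure)
            self.history.append(best)
            return best

    agent = PleasureSeekingAgent()
    print([round(agent.step(), 2) for _ in range(10)])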


Nov 13 2013


Rank #3: In defense of transparent computationalism

This talk, given on May 5th 2006 in Laval, France at the International Conference on Computers and Philosophy, was originally to be based on a paper I wrote in 1999, but ended up diverging from it substantially.

Abstract of the original 1999 paper:

"A distinction is made between two senses of the claim “cognition is computation”. One sense, the opaque reading, takes computation to be whatever is described by our current computational theory and claims that cognition is best understood in terms of that theory. The transparent reading, which has its primary allegiance to the phenomenon of computation, rather than to any particular theory of it, is the claim that the best account of cognition will be given by whatever theory turns out to be the best account of the phenomenon of computation. The distinction is clarified and defended against charges of circularity and changing the subject. Several well-known objections to computationalism are then reviewed, and for each the question of whether the transparent reading of the computationalist claim can provide a response is considered."

I added to this by claiming that Gödel-style arguments don't show AI is impossible, but rather that the Church-Turing thesis is false. I rejected currently fashionable notions of pan-computationalism in favour of a view that makes having semantic properties essential to computation. I also argued that even if computationalism turns out to be false, it might still be possible for an artificial computational system to have a mind by virtue, at least in part, of the program it is running, since programming a computer not only changes it functionally, but also physically.


Nov 13 2013


Rank #4: Counterfactual computational vehicles of consciousness

Given April 7th 2006 in Tucson at Toward a Science of Consciousness 2006, this is really two talks in one. My attendance at the conference was made possible in part by grant OCG43170 from the British Academy; I am grateful for their support.

Abstract:
In a recent paper (Bishop 2002), Bishop argues against computational explanations of consciousness by confronting them with a dilemma. On a non-causal or weakly causal construal of computation, familiar arguments from Putnam and Searle reveal computation to be too observer-relative to be able to underwrite any law-involving explanation of consciousness. On a (much more plausible) strongly causal notion of computation (Chalmers 1994; Chalmers 1996; Chrisley 1994), computational explanations must advert to non-actual, counterfactual states and state transitions. Working this fact into versions of Fading Qualia and Suddenly Disappearing Qualia arguments, Bishop concludes that strongly causal computationalism cannot be physicalist, in that it maintains that two states may differ only in their non-physical (i.e. counterfactual) properties and yet be phenomenally distinct. I rebut this argument by embracing the second horn, and denying that appeal to non-actual or counterfactual properties is at odds with physicalism; indeed, it is the lifeblood of normal, physical, causal explanation. I further cast doubt on the argument by showing that it is too strong; if it is correct, computational states could not explain anything at all, not even computational phenomena, let alone conscious experience. I show how computational states that differ in their counterfactual properties must, contra Bishop's characterization, differ with respect to some of their actual properties. However, I note that inter-dependencies between current experience and computational state may only be explicable by reference to explicit, counterfactual states rather than the occurrent physical states which realize those dispositional properties. This is shown to cohere with at least one understanding (Chrisley 2004) of the sensorimotor contingency theory of perceptual experience (O'Regan and Noe 2000), in which expectation is understood as a disposition to produce a computational state corresponding to the sensation one would have if one were to perform a particular action. I conclude by sketching some implications for the search for correlates of experience. The considerations arising out of Bishop's argument show that if computationalism is true, then the search for correlates will fail if it only considers occurrent non-dispositionally construed physical states at a time to be the possible correlates of the experience being had at that time.
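
To make the point about counterfactual individuation concrete, here is a toy illustration of my own (not from the talk, with entirely invented state and action names): two finite-state machines can be in the very same occurrent state, yet count as implementing different computations because their transition tables (what they would do under inputs that never actually arrive) differ. The table is also where the difference in counterfactual properties shows up as a difference in actual, stored structure, and it can be read as a set of enactive expectations: the state the system would produce were a given action performed.

    # Two machines in the same occurrent state but with different dispositions.
    # The transition table encodes counterfactuals: which state WOULD follow each
    # possible input, including inputs that never actually occur.

    machine_a = {
        "state": "S0",
        "transitions": {("S0", "left"): "S1", ("S0", "right"): "S2"},
    }
    machine_b = {
        "state": "S0",
        "transitions": {("S0", "left"): "S2", ("S0", "right"): "S1"},
    }

    def step(machine, action):
        """Advance a machine by one (possibly merely hypothetical) action."""
        machine["state"] = machine["transitions"][(machine["state"], action)]
        return machine["state"]

    # Occurrently, the two machines are indistinguishable ...
    assert machine_a["state"] == machine_b["state"]

    # ... but they differ in what they would do, and that counterfactual
    # difference is carried by an actual feature: the stored transition table.
    assert machine_a["transitions"] != machine_b["transitions"]

    # Read as enactive expectations: the state each system expects to produce
    # were the action "left" to be performed.
    print("A expects:", machine_a["transitions"][("S0", "left")])
    print("B expects:", machine_b["transitions"][("S0", "left")])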

References:

  • Bishop, J. M. (2002) "Dancing With Pixies", in Preston, J. and Bishop, J. M. (eds), Views into the Chinese Room, pp. 360-379. Oxford University Press.
  • Chalmers, D. J. (1994) "On Implementing a Computation", Minds and Machines, vol. 4, pp. 391-402.
  • Chalmers, D. J. (1996) "Does a Rock Implement Every Finite-State Automaton?", Synthese, vol. 108, pp. 309-333.
  • Chrisley, R. (1994) "Why Everything Doesn't Realize Every Computation", Minds and Machines, vol. 4, no. 4, pp. 403-420.
  • Chrisley, R. (2004) "Perceptual Experience as the Mastery of Sensorimotor Representational Contingencies", abstract in Proceedings of Toward a Science of Consciousness 2004, p. 119.
  • O'Regan, K. and Noe, A. (2001) "A Sensorimotor Account of Vision and Visual Consciousness", Behavioral and Brain Sciences, vol. 24, no. 5.

Nov 13 2013


Rank #5: Evolving concepts of creativity: A mirror, a tightrope and an inkblot

A few hours ago I spoke at the second University of Sussex creativity workshop, "Evolving Views of Creativity". Speaking near the end of the day, my role was to synthesize what had been said, in aid of developing a consensus on what we at Sussex mean by creativity. For these reasons, this lecture will be of most interest to people at Sussex, but others may get something out of it as well.

My main point is that creativity can be seen as the result of maintaining a fruitful tension between:

  • Self and environment
  • The intuitive and the conceptual
  • "Blue-sky" and applied
  • Novel and familiar
  • Chaos and order
  • Disconnection and engagement.

Nov 13 2013


Rank #6: Machine models of consciousness: An ASSC tutorial (part 1)

Last Friday (June 23rd), as part of the 10th meeting of the Association for the Scientific Study of Consciousness in Oxford, Igor Aleksander, Murray Shanahan and I jointly offered a tutorial on machine consciousness. I started with a discussion of general philosophical issues, the approach of Aaron Sloman and myself, and Pentti Haikonen's model. Igor Aleksander followed with a description of his axiomatic approach, a demo of his system in action, and a quick survey of the work of Franklin and Baars, and of Krichmar and Edelman. Murray Shanahan took the third hour with a description of his own approach, showing how it unifies the Global Workspace approach of Baars with the Simulation Hypothesis approach of Cotterill and Hesslow. He also described Holland's approach, showing the latest videos of his spooky robot Cronos.

Some general information about the tutorial can be found at http://www.assc10.org.uk/workshops.html#A1.


Nov 13 2013


Rank #7: Epistemic blindspot sets: A resolution of Sorensen's strengthened paradox of the surprise examination

I am not officially a member of the Department of Philosophy at Sussex (I'm in the Department of Informatics and am the Director of COGS), so the fact that I was invited to speak at the Philosophy Department's Away Day on June 13th is evidence that the "HUMS Philosophers" and "COGS Philosophers" at Sussex maintain a good working relationship. I didn't want to talk on a very COGSy topic, so I chose to speak on what I take to be a solution to a paradox that Sorensen formulated in 1986. Sorensen presented it as a strengthened version of the paradox of the surprise examination, and claimed that neither his solution, nor any other purported solution to the usual version of that paradox, solves the strengthened version. The strengthened version is a generalisation of Kavka's toxin puzzle to multiple instances of the cycle of offer-intention-consumption of the toxin. My solution is to take Sorensen's notion of an epistemic blindspot and generalise it to the case of an epistemic blindspot set. I then show that the premises and conclusion of the reasoning of the subject of Sorensen's paradox form an epistemic blindspot set, which implies that that reasoning is not epistemically consistent, and therefore cannot confer knowledge, thus resolving the paradox.
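
A minimal formal gloss of the key notion, reconstructed from the summary above rather than taken from the handout (so the notation and exact formulation are mine): writing K_{a,t} for "agent a knows at time t", a consistent set of propositions is an epistemic blindspot set for a at t just in case its members cannot all be known together, even if each is knowable on its own. In LaTeX:

    \Gamma \text{ is an epistemic blindspot set for } a \text{ at } t
    \;\iff\;
    \mathrm{Con}(\Gamma) \;\wedge\; \neg \Diamond\, K_{a,t}\!\Big(\bigwedge_{\varphi \in \Gamma} \varphi\Big)

A single blindspot is then the special case in which the set has just one member. On this gloss, if the premises and conclusion of the subject's reasoning jointly form a blindspot set, that reasoning is not epistemically consistent and so cannot confer knowledge of its conclusion.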

Unfortunately, the audio recording levels were too high, so there is a lot of distortion; you may find that it makes the podcast unlistenable. Also, instead of a PowerPoint file of slides, there is a two-page PDF handout.

References:

  • Sorensen, R. A. (1986) "A strengthened prediction paradox", Philosophical Quarterly, vol. 36, pp. 504-513.
  • Sorensen, R. A. (1984) "Conditional blindspots and the knowledge squeeze: a solution to the prediction paradox", Australasian Journal of Philosophy, vol. 62, pp. 126-135.

Nov 13 2013


Rank #8: Painting an experience? How aesthetics might assist a neuroscience of sensory experience

IULM University, Milan, hosted a European Science Foundation Exploratory Workshop on "Neuroesthetics: When art and the brain collide" on the 24th and 25th of September, 2009. In my invited lecture, I departed significantly from my advertised title, instead using my time to introduce the audience to five strands in my research related to the intersection of neuroscience/cognitive science and art/creativity:

  • Embodied creativity
  • Enactive models of experience
  • Synthetic phenomenology
  • Interactive empiricism
  • Art works/installations

Nov 12 2013


Rank #9: Sensory Augmentation, Synthetic Phenomenology and Interactive Empiricism

Helena de Preester using the Enactive Torch

On Thursday the 26th and Friday the 27th of March, 2009, the e-sense project hosted the Key Issues in Sensory Augmentation Workshop at the University of Sussex. I was invited to speak at the workshop; my position statement (included below) serves as a good (if long) summary of my talk.

Sensory Augmentation, Synthetic Phenomenology & Interactive Empiricism: A Position Statement
How can empirical experiments with sensory augmentation devices be used to further philosophical and psychological enquiry into cognition and perception?

The use of sensory augmentation devices can play a crucial role in overcoming conceptual roadblocks in philosophy of mind, especially concerning our understanding of conscious experience and perception. The reciprocal design/use cycle of such devices might facilitate the kind of conceptual advance that is necessary for progress toward a scientific account of consciousness, a kind of advance that is not possible to induce, it is argued, through traditional discursive, rhetorical and argumentative means.

It is proposed that a philosopher's experience of using sensory augmentation devices can play a critical role in the development of their concepts of experience (Chrisley, Froese & Spiers 2008). The role of such experiences is not the same as the role of, say, experimental observation in standard views of empirical science. On the orthodox view, an experiment is designed to test a (propositionally stated) hypothesis. The experiences that constitute the observational component of the experiment relate in a pre-determined, conceptually well-defined way to the hypothesis being tested. This is strikingly different from the role of experience emphasized by interactive empiricism (Chrisley 2010a; Chrisley 2008), in which the experiences transform the conceptual repertoire of the philosopher, rather than merely providing evidence for or against an empirical, non-philosophical proposition composed of previously possessed concepts.

A means of evaluation is needed to test the effectiveness of the device with respect to the goals of interactive empiricism and conceptual change. Experimental philosophy (Nichols 2004) looks at the way in which subjects' philosophical views (usually conceived as something like degree of belief in a proposition) change as various contingencies related to the proposition change (e.g., how does the way one describes an ethical dilemma change subjects' moral judgements of the various actions in that situation? cf., e.g., Knobe 2005). One could apply this technique directly, by empirically investigating how use of sensory augmentation devices affects subjects' degree of belief in propositions concerning the nature of perceptual experience. However, it would be more in keeping with the insights of interactive empiricism if such experiments measured behaviour other than verbal assent to or dissent from propositions, such as reaction times and errors in classification behaviour. This might allow one to detect changes in subjects' conceptions of the domain that are not reportable or detectable by more propositional, self-reflective means.

Are there rigorous techniques that can characterise the subjective experience of using sensory augmentation technology?
Synthetic phenomenology is 1) any attempt to characterize the phenomenal states possessed by, or modelled by, an artefact (such as a robot); or 2) any attempt to use an artefact to help specify phenomenal states (independently of whether such states are possessed by a naturally conscious being or an artefact) (Chrisley 2009; Chrisley 2010b; Chrisley 2008). Although "that" clauses, such as "Bob believes that the dog is running", work for specifying the content of linguistically and conceptually structured mental states (such as those involved in explicit reasoning, logical thought, etc.), there is reason to believe that some aspects of mentality (e.g., some aspects of visual experience) have content that is not conceptually structured. Insofar as language carries only conceptual content, "that" clauses will not be able to specify the non-conceptual content of experience. An alternative means, such as synthetic phenomenology, is needed.

Earlier (Chrisley 1995), I had suggested that we might use the states of a robotic model of consciousness to act as specifications of the contents of the modelled experiences. This idea has been developed for the case of specifying the non-conceptual content of visual experiences in the SEER-3 project (Chrisley and Parthemore 2007a; Chrisley and Parthemore 2007b). Specifications using SEER-3 rely on a discriminative theory of visual experience based on the notion of enactive expectations (expectations the robot has that it would receive a particular input were it to move in a particular way). Depictions of the changing expectational state of the robot can be constructed in real time, depictions that require the viewer themselves to deploy sensorimotor skills of the very kind that the theory takes to be essential to individuating the specified content. Thus, the viewer comes to know the discriminating characteristics of the content in an intuitive way (in contrast to, say, reading a list of formal statements, each referring to one of the millions of expectations the robotic system has).

Just as SEER-3 models, and permits the specification of, experiences in a modality we naturally possess (vision), so might other robotic systems, equipped with sensors that do not correspond to anything in the natural human sensory repertoire, model and permit the specification of other experiential states. As with the case of visual experience, specification cannot consist in a mere recording or snapshot of the sensor state at any moment, nor even in a sequence of such snapshots. Rather, the specification must be dynamically generated in response to the specification consumer’s probing of the environment (virtual or real), with the sensor values being altered in a way that compensates for both the subjectivity of the experience being specified, and that of the recipient herself.
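
As a loose sketch, under my own assumptions rather than a description of SEER-3's actual implementation, the idea of a dynamically generated specification can be pictured as a callable that answers the consumer's probes with the sensation the modelled experiencer would expect for that probe, instead of handing over a static snapshot. All sensor and action names below are invented for illustration.

    from typing import Callable, Dict

    # Hypothetical enactive expectations of the modelled system:
    # action -> the sensation the system expects were that action performed.
    expectations: Dict[str, str] = {
        "saccade_left": "red_patch",
        "saccade_right": "high_contrast_edge",
        "fixate": "fine_texture",
    }

    def make_specification(exp: Dict[str, str]) -> Callable[[str], str]:
        """Return an interactive specification: probe it with an action and
        receive the sensation the modelled experiencer would expect."""
        def specify(probe_action: str) -> str:
            return exp.get(probe_action, "no_expectation")
        return specify

    spec = make_specification(expectations)

    # The consumer comes to grasp the content by probing, i.e. by exercising
    # sensorimotor skills of the kind the theory takes to individuate it.
    for action in ("saccade_left", "fixate", "saccade_right"):
        print(action, "->", spec(action))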

References:

Nov 12 2013


Rank #10: Concepts and Proto-Concepts in Cognitive Science (part 2)

As explained in the previous post, in August of 2010 I gave two lectures as part of the annual Summer School of the Swedish Graduate School in Cognitive Science (SweCog; see http://www.swecog.se/summerschool.shtml). The previous post contains the first of these lectures; this is part two. Near the end I showed a movie as a kind of dynamical specification of the non-conceptual content of visual experience modeled by the SEER-3 robot. This movie is not included in the PodSlide file; those interested in seeing it should download the supplementary file: "Non-conceptual content specification demo".


Nov 12 2013
