
A Brief Outline of Logical Positivism

- philosophical movement of the 1920s and '30s spearheaded by the Vienna Circle, a group of philosophers including Carnap and Schlick, who pursued an empiricist scientific philosophy in the tradition of David Hume (and, more recently, of Bertrand Russell and the early work of Ludwig Wittgenstein); brought to the English-speaking world largely by A. J. Ayer's (1936) Language, Truth and Logic
- proposed science as the model for all knowledge and truth; saw philosophy's task as the formalization of scientific method and the analysis of language in order to clarify scientific propositions and thereby avoid misleading interpretations of its concepts and theories
- asserted that the foundation of science was concrete experience; evidence about any empirical hypothesis (Hume's "matters of fact") must be publicly observable, i.e., available to anyone and unambiguously interpretable by anyone -- unlike, for instance, the data of introspection, which are available only to the introspector
- the powerful apparatus of modern symbolic logic (the framework for what Hume had called "relations among ideas") was added to empirical observation to support the construction of sophisticated scientific theories
- the meaning of a statement was taken to be its method of verification, the procedure for deciding the truth of the statement; any (non-analytic) statement which could not, at least in principle, be verified through observation was held to be metaphysical and therefore meaningless -- which included statements about morals, theology, aesthetics, and most of Hegel's idealism (the nineteenth century's predominant philosophical world view)
- closely related to verificationism was Bridgman's (1927) independently developed principle of operationism, which said that theoretical terms describing unobservable entities (e.g., electrons) were to be tied to specific observations based on the actual physical operations used to gather evidence about them (e.g., emission lines on a spectrometer)
- logical positivism's significance for psychology was the assertion that statements about mental states and operations would turn out to correspond exactly to statements about observable behaviors, so that one could translate problematic propositions about cognitive phenomena into scientifically testable propositions about overt actions
- examples of problems that later emerged for the approach: in physics, a single concept like distance requires completely different operational definitions for "distance between two points on a yardstick" and "distance between two stars", leading to the strange conclusion that these are actually two different concepts; in psychology, attributing "having a headache" to a person who moans, holds his head, takes aspirin, etc., has exactly the same observational basis as the attribution of "faking a headache" -- thus making the distinction between the two states meaningless!

Neobehaviorism

- modification, beginning around 1930, of the behaviorist approach proposed by Watson and Guthrie, to reflect growing sophistication in the attempt to account for knowledge in terms of observable behavior; four key elements were:
  (1) the influence of logical positivism and operationism
  (2) the use of animals in research, reflecting the confidence of behaviorist psychologists that their principles were sufficiently universal, well-established, and widely held that findings based on easily manipulated, controlled, and observed animals would be seen as generalizable to all instances of learning, including that of humans
  (3) an emphasis on learning as the crucial psychological process in animals, since it represents (for the empiricist) the source of every facet of behavior and provides the means for adapting to a changing environment throughout the animal's lifespan
  (4) a willingness to construct increasingly complex theories of behavior, using the concept of the "intervening variable" to represent cognitive abilities and other unobservables, and departing from the traditional straightforward connection, by some proposed mechanism, of observable stimuli and responses

Edward Chace Tolman's behaviorism

methodological behaviorism
- all data and evidence must be from observable overt behavior, but that doesn't imply that overt behavior is all there is; behavior is all that may be productively studied, but that doesn't rule out the existence of minds
- rejected mechanism (implied by the stronger metaphysical or radical Watsonian behaviorism) because even though S and R may be observable, the S-R connection is an unobservable abstraction; the commitment to building behavior out of S-R units is therefore unjustified

molar behaviorism
- criticized Watson's (and Guthrie's) focus on molecular physiological responses at the level of muscle and gland activity, instead arguing that the larger-scale molar description (like running a maze or going home) was more appropriate; MacFarlane (1930), Lashley (1924), and others showed that rats who learned to swim a maze could run through it when it was drained, and those who learned to run it could swim it when it was flooded: the behavior was different from the particular muscle movements used to perform it; later, Wickens (1938) showed that humans who learned to withdraw a finger from a shock also withdrew it when the hand was turned upside down, even though that new situation required a different muscle contraction to move the finger in the opposite direction
- molar behavior has emergent properties not evident in the molecular description, just as water has emergent molar properties (wetness) not evident in the molecular properties of hydrogen and oxygen atoms or individual water molecules

purposive behaviorism
- argued that behavior has to be molar because its principal identifying feature is its goal or purpose: behavior reeks of purpose and cognition; Tolman objected to Thorndike's disconnecting behavior from its consequences - i.e., his view that reinforcement mechanically connects a stimulus and response, as if the response were not directed toward the goal of getting to the food/freedom/etc. in the first place
- descriptive approach (not mentalistic): purpose is inherent or immanent in behavior - it can be objectively seen in a rat's persisting in some behavior until some consequence results - no need to infer it from behavior as if it were an underlying mental cause of the behavior

later theorizing (see Tolman, 1948)
- saw cognitions as representations of the environment (e.g., cognitive maps) that exist objectively in the nervous system - they were thus accepted not simply as terms describing behavior but as real underlying determinants of behavior; behavior is the objective data that indexes mental phenomena such as the use of cognitive maps; cognitions are intervening variables in the theory, tied to observations of behavior

Three research programs associated with Tolman

insight learning (see class notes) - demonstrates that rats can behave insightfully by grasping the structure of a problem, rather than incrementally improving their performance. Provides evidence in favor of the flexibility or docility of behavior (vs. the mechanistic rigidity of Thorndike's S-R habit learning by trial and error).

latent learning (see Tolman, 1948; also Hergenhahn & Olson pp. 298-300) - demonstrates that learning is not dependent on reinforcement; rather, reinforcement has its effect on performance - it gives the rat a motivation to use what it has learned. Provides evidence for the purposive (vs. the mechanistic) conception of behavior.

place learning (see Tolman, 1948; also Hergenhahn & Olson pp. 300-303) - demonstrates that rats learn the layout of a maze rather than a particular sequence of movements. Provides evidence for the molar (place learning) as opposed to the molecular (response learning) perspective, as well as for the development of cognitive maps.
Tolman's use of intervening variables

- experimental results are often expressed by an algebraic function: a learning curve shows a performance measure as a function of an experience measure; more generally, a dependent variable is predicted by some function of an independent variable:

  y = f(x),   e.g., speed = f(# reinforcements),   in general, DV = f(IV)

- some possible DVs: speed of maze running (increases); no. of trials to reach criterion (decreases); no. of wrong turns/errors (decreases)
- some possible IVs: no. of reinforced trials; no. of hours of food deprivation; incentive value of reinforcer; maze difficulty
- usually just one DV is observed and measured to index performance (and therefore learning) - but performance must always involve the effects of many IVs, whether or not they are manipulated, controlled, held constant, and so forth; a complete explanation of behavior must take into account the many IVs at work and how they might interact to affect the DV. For example, a given number of past reinforcements might be sufficient to get a rat through a simple maze but not a complex one, or a long-deprived rat may run the maze for any kind of food reinforcement regardless of its incentive value (see below). (Tolman in 1940 was one of the earliest experimentalists to employ factorial experimental designs to investigate this interaction of IVs.)
- the form of the function then is really something more complicated, like DV = f(IV1, IV2, IV3, ...): the DV is a function of many IVs
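The interaction among IVs can be made concrete with a toy function. This is a hypothetical sketch: the function, numbers, and units are invented for illustration and come from no actual experiment of Tolman's.

```python
def running_speed(n_reinforcements, maze_difficulty):
    """Toy DV = f(IV1, IV2): maze-running speed as a joint function of
    reinforced trials and maze difficulty (all units arbitrary).
    The two IVs interact: a number of reinforcements sufficient for a
    simple maze is not sufficient for a complex one."""
    learning = min(1.0, n_reinforcements / (10 * maze_difficulty))
    return 100 * learning  # speed, arbitrary units

# 20 reinforced trials get a rat through a simple maze at full speed...
print(running_speed(20, maze_difficulty=1))   # 100.0
# ...but the same 20 trials are not enough for a complex maze:
print(running_speed(20, maze_difficulty=4))   # 50.0
```

Neither IV alone predicts the DV here; that is exactly the situation a factorial design is meant to expose.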

Tolman realized it would be more useful, and theoretically simpler, to express performance in terms of some specific combinations of IVs, instead of trying to specify how every IV would affect the DV when combined with every possible level of every other possible IV (an obviously endless, impossible task). These specific useful combinations of independent variables he called intervening variables, since they are not the directly observed IVs or DVs but a third type of variable intervening between these observations. Intervening variables represent useful summaries of the effects of IVs in combination with each other. Tolman chose them so that they could be mapped onto the informal mentalist vocabulary of common usage: cognitions, hypotheses, expectancies, and so forth were thus given operational definitions and became intervening variables.

- this provided both a way to simplify the complex functional relation between the IVs and the DV (a strategy which Hull made use of), and, significantly for future developments in psychology, a way to allow the use of cognitive concepts while keeping them firmly anchored to observation through operational definitions; accounting for knowledge in terms of observable behavior could be done if knowledge itself were thus defined in terms of observable behavior
- for Tolman, a theory in psychology was a set of intervening variables - such a theory would give some concrete scientific meaning to mentalist concepts, and specify how those concepts together are related to change in an observable dependent variable

The following example, culled from various sources, carries the flavor of the intervening variable approach, though it does not represent any specific account proposed by Tolman. [Note that the text example (pp. 304-308) treats each intervening variable as corresponding to just one IV; although faithful to Tolman's 1938 paper, it is oversimplified in that no combinations of IVs are considered. The following example describes an intervening variable as a summary of more than one IV.]

- the IV no. of hours of food deprivation is an observable indicator of the animal's motivational state and is directly related to performance measures such as speed of maze running
- the IV incentive (see Hergenhahn & Olson pp. 303-304 on "reinforcement expectancy") also reflects motivation, is also directly related to performance, and is also observable - by measuring, say, the difference in a rat's performance when reinforced with sunflower seeds vs. with the preferred reinforcer, bran mash (yum!)
- the functional relation, then, looks like DV = f(IV1, IV2):

    deprivation (IV1) -> performance measure (e.g., maze running speed)
    incentive (IV2)   -> performance measure (e.g., maze running speed)

  and the theorist has no simple way of talking about them both: the same amount of deprivation may produce slower running speeds with sunflower seeds as reinforcement than with bran mash as reinforcement; or, incentive differences may disappear if deprivation is very great. How can any relationship be stated between just one of these and learning? What the theorist needs to get at is the conjunction of the two IVs - how much the animal wants the food, or how hungry it is.
- the two IVs can be combined into the intervening variable hunger, which in turn affects running speed:

    deprivation (IV1) --\
                         --> hunger (intervening variable) -> performance measure (DV: speed)
    incentive (IV2)   --/

- so "hunger" = deprivation plus incentive [more accurately, hunger = f(deprivation, incentive): the function that produces hunger from deprivation and incentive might be addition, but it might be multiplication or something else!]

Tolman referred to two major types of intervening variables, demands and cognitions. Hunger is an example of a demand, an animal's motivation to get to some goal object.
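A minimal code sketch shows what the intervening variable buys. The capped-product combining function, the incentive scale, and every number here are my own inventions - one of the many possible functions Tolman's scheme would allow, not his.

```python
def hunger(deprivation_hours, incentive):
    """Intervening variable: collapses two IVs into one quantity.
    incentive is on an invented 0..1 scale (say, sunflower seeds ~0.4,
    bran mash ~1.0). The capped product is hypothetical -- the real
    combining function might be addition, multiplication, or anything."""
    return min(1.0, (deprivation_hours / 24) * incentive)

def speed(h):
    """DV as a function of the single intervening variable."""
    return 100 * h  # maze running speed, arbitrary units

# same deprivation, different incentive -> different performance:
print(speed(hunger(24, 0.4)))   # 40.0  (sunflower seeds)
print(speed(hunger(24, 1.0)))   # 100.0 (bran mash)
# very great deprivation -> the incentive difference disappears:
print(speed(hunger(72, 0.4)))   # 100.0
print(speed(hunger(72, 1.0)))   # 100.0
```

The theorist now states one relation, speed = f(hunger), instead of tabulating every deprivation-by-incentive combination separately.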
A cognition is what an animal acquires through learning - for example, an expectancy: some stimulus is a sign that a goal object can be got to by some means, i.e., the maze situation (or other stimulus situation) is a sign that running to the end will result in getting food. Cognitions may develop as follows:

- hypotheses, though cognitive-sounding, are observable in an animal's systematically shifting its attention to different aspects of a problem situation, instead of engaging in the random trial-and-error behavior that earlier behaviorists would expect to see; being rewarded does not strengthen some assumed S-R connection in the animal - instead it provides confirmation for the hypothesis being tried out. For Tolman, the notion of reinforcement is replaced by confirmation of hypotheses.
- a hypothesis that is often enough confirmed becomes an expectancy - for instance, that a particular response performed in a particular stimulus situation will lead to another particular stimulus situation (one in which the animal obtains food!). This expectancy could be observed in the animal's consistent quick running of the maze when hungry.
- this expectancy could then be generalized into a means-end readiness, a belief that performing that type of response in that type of situation would lead to that type of goal object, so that a rat might be disposed to run some other maze expecting some kind of food at the end. This means-end readiness might be observable if the rat did perform well on new mazes, for example.

The final formula for explaining behavior (performance), then, says that performance is a function of an animal's need or motivation, and the knowledge it has available to address that need:

  Behavior = Demand (e.g., hunger) + Cognition (e.g., expectancies, cognitive maps)

- this formula's ingredients are intervening variables which are tied explicitly to observable aspects of the behavior being examined.
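The progression from hypothesis to expectancy can be sketched as a confirmation counter with a threshold. The class name and the threshold of 5 are invented for illustration; Tolman specified no such number.

```python
class Cognition:
    """Sketch of Tolman's confirmation-based learning: reward does not
    strengthen an S-R bond; it confirms the hypothesis being tried out,
    and a hypothesis confirmed often enough becomes an expectancy.
    The threshold of 5 confirmations is purely illustrative."""
    EXPECTANCY_THRESHOLD = 5

    def __init__(self, description):
        self.description = description
        self.confirmations = 0

    def confirm(self):
        """The rat finds food: the hypothesis is confirmed, not 'stamped in'."""
        self.confirmations += 1

    @property
    def status(self):
        if self.confirmations >= self.EXPECTANCY_THRESHOLD:
            return "expectancy"
        return "hypothesis"

c = Cognition("running to the end of this maze is a sign of food")
for trial in range(5):
    c.confirm()
print(c.status)   # expectancy
```

A means-end readiness would be a further generalization over many such expectancies (same type of situation, response, and goal object), which this sketch does not model.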
- the role of learning is to produce the cognitions - [though Tolman's theory is often referred to as sign-learning or S-S learning (in contrast to S-R learning), the traditional notion of association is not an accurate description of what he said was learned; his full term was sign-Gestalt expectation, meant to imply not just a stimulus that plays the part of a sign for another stimulus, but rather an integrated whole consisting of the sign, the goal object, and the means by which the goal could be got to]

Tolman's early theorizing (1932) emphasized the point-at-ability of his cognitive terms. In 1935 he used intervening variables to represent those terms. By 1948 he accepted cognitions as real things in the brain, rather than mere descriptions of behavior. Shortly thereafter, others offered a terminology in which the term intervening variable referred strictly to a rearrangement or summary of the observed independent variables, and the term hypothetical construct was used when an intervening variable was supposed to stand for a real characteristic of the animal rather than just a summary of the data itself (i.e., when it had surplus meaning). Tolman's 1948 variables would thus be called hypothetical constructs. But he rejected the distinction, since, he argued, the only way to come up with ideas for what the intervening variables should be in the first place was to imagine what the "real" entities must be.

Three research programs associated with Tolman

insight learning: rats were trained to take three possible paths to a goal and developed a preference for the shortest, then the next shortest, then the longest. When a block was placed in the shortest path, rats returned to the decision point and chose the next shortest path. When a block was placed so as to eliminate both shorter paths, rats returned to the decision point and chose the longest path without trying out the second path at all. Demonstrates (a) that rats learned the layout of the maze, but more importantly (b) that they could spontaneously choose the successful path without having to overcome any mechanically established habit, i.e., without trial and error: they can behave insightfully by grasping the structure of a problem, rather than incrementally improving their performance. Provides evidence in favor of the flexibility or docility of behavior (vs. the mechanistic rigidity of Thorndike's S-R habit learning).

latent learning: three groups of rats ran a maze. The reinforced group's learning curve showed decreasing time to get through the maze over several days of trials; the unreinforced group showed little decrease in running time. The experimental group received reinforcement beginning on the eleventh day and immediately showed a decrease in running time, so that they ran as fast on the twelfth day as did the reinforced group. Unless this group did the same amount of learning in one day as the reinforced group had done in twelve (!), this demonstrates that learning is not dependent on reinforcement; rather, reinforcement has its effect on performance - it gives the rat a motivation to use what it has learned. Provides evidence for the purposive (vs. the mechanistic) conception of behavior.

place learning: rats learned an elevated maze which required them to make a sequence of turns to get to the goal. Then, in the test maze, rats were blocked from taking the first alley they used in the learning phase, and had to choose among several radiating paths to get to the goal. Instead of choosing the path most directionally similar to the learned path, or one pointing in the direction of their initial turn away from the goal in the learned path, the most frequently chosen path was one which led directly to the goal (which they could see); the next most frequent choice was a turn from the starting path in the perpendicular direction toward the goal. Demonstrates that rats learn the layout of a maze rather than a particular sequence of movements. Provides evidence for the molar (place learning) as opposed to the molecular (response learning) perspective, as well as for the development of cognitive maps.
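The latent-learning pattern falls out of treating performance as the joint product of demand and cognition - a minimal sketch of the purposive account in the spirit of Behavior = f(Demand, Cognition). All parameters (learning rate, demand levels, group sizes) are invented, not Tolman's data.

```python
def simulate(days, reward_from_day):
    """Cognition (maze knowledge) grows with every run, rewarded or not;
    demand (motivation to use that knowledge) is high only once food
    appears. Returns a performance score (0..1, higher = faster) per day.
    The 0.1 learning increment and 0.2 exploratory demand are invented."""
    performance = []
    cognition = 0.0
    for day in range(1, days + 1):
        demand = 1.0 if day >= reward_from_day else 0.2  # idle exploring
        performance.append(cognition * demand)
        cognition = min(1.0, cognition + 0.1)            # learns every run
    return performance

always = simulate(12, reward_from_day=1)    # reinforced group
latent = simulate(12, reward_from_day=11)   # experimental group
# the experimental group's performance jumps as soon as reward appears,
# matching the always-reinforced group: the learning was already there
print(latent[11] == always[11])   # True
```

In this model reinforcement changes only the demand term, never the cognition term - which is exactly the claim the latent-learning experiment supports.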
