Information

Biological Plausibility of FORCE training


In "Supervised Learning in Spiking Neural Networks with FORCE Training," Wilten Nicola and Claudia Clopath derive a learning rule for learning non-linear dynamics with populations of spiking neurons. The learning rule seems to depend only on:

  1. an error signal
  2. the firing rate of each neuron
  3. an approximate inverse correlation matrix calculated from the firing rates of each neuron

Is it possible to calculate the inverse correlation matrix using another population of neurons, or is there some biological mechanism that could explain this computation? Additionally, is this calculation sensitive to delay? Forgetting to take delay into account is often the undoing of learning rules and cognitive architectures.


From the paper:

Although FORCE trained networks have dynamics that are starting to resemble those of populations of neurons, at present all top-down procedures used to construct any functional spiking neural network need further work to become biologically plausible learning rules [Sussillo and Abbott, 2009, Boerlin et al., 2013, Eliasmith et al., 2012]. For example, FORCE trained networks require non-local information in the form of the correlation matrix $P(t)$. However, we should not dismiss the final weight matrices generated by these techniques as biologically implausible simply because the techniques are themselves biologically implausible. More work should be done in implementing either FORCE, NEF, or spike-based coding networks using a biologically plausible learning mechanism based on synaptic plasticity or homeostasis [Bi and Poo, 1998, Pfister and Gerstner, 2006, Clopath et al., 2010, Graupner and Brunel, 2012, Babadi and Abbott, 2016, Vogels et al., 2011]. This has been resolved for spike-based coding networks and linear dynamical systems for example [Bourdoukan and Deneve, 2015]

Basically, calculating the correlation matrix $P(t)$ requires every neuron to know what every other neuron is doing, so the FORCE algorithm isn't biologically plausible. I'm not sure what alternative there would be for calculating the correlation matrix. Maybe there's some mathematical way to approximate it gradually over time?
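For what it's worth, the $P(t)$ update in FORCE is already "gradual" in one sense: it is a recursive least-squares (RLS) estimate, built up one rank-1 correction at a time from the three quantities listed in the question. Here is a minimal NumPy sketch of that update; the variable names and the toy feature model driving it are mine, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                               # number of neurons
A = 0.5 * rng.standard_normal((N, 3)) # fixed random projection (toy stand-in)
P = np.eye(N)                         # running estimate of the inverse correlation matrix
w = np.zeros(N)                       # readout weights being learned

errors = []
for t in range(1000):
    # toy firing rates: nonlinear features of a sine/cosine drive
    feats = np.array([np.sin(0.05 * t), np.cos(0.05 * t), 1.0])
    r = np.tanh(A @ feats)            # stand-in for the network's firing rates

    target = np.sin(0.05 * t)
    z = w @ r                         # network readout
    e = z - target                    # 1. the error signal
    Pr = P @ r                        # 2./3. rates through the inverse-correlation estimate
    k = Pr / (1.0 + r @ Pr)           # gain vector (equals the updated P applied to r)
    P -= np.outer(k, Pr)              # rank-1 RLS update: P tracks (sum_t r r^T + I)^{-1}
    w -= e * k                        # FORCE weight update: error times gain
    errors.append(e * e)
```

The biological sticking point is visible in the code: the `P @ r` line couples every neuron's rate to every other neuron's, which is exactly the non-local step the authors flag, even though each individual update to $P$ is incremental.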


Understanding the Dynamics of the Aging Process

Aging is associated with changes in dynamic biological, physiological, environmental, psychological, behavioral, and social processes. Some age-related changes are benign, such as graying hair. Others result in declines in function of the senses and activities of daily life and increased susceptibility to and frequency of disease, frailty, or disability. In fact, advancing age is the major risk factor for a number of chronic diseases in humans.

Studies from the basic biology of aging using laboratory animals — and now extended to human populations — have led to the emergence of theories to explain aging. While there is no single “key” to explain aging, these studies have demonstrated that the rate of aging can be slowed, suggesting that targeting aging will coincidentally slow the appearance and/or reduce the burden of numerous diseases and increase healthspan (the portion of life spent in good health).

To develop new interventions for the prevention, early detection, diagnosis, and treatment of aging-related diseases, disorders, and disabilities, we must first understand their causes and the factors that place people at increased risk for their initiation and progression. NIA-supported researchers are engaged in basic science at all levels of analysis, from molecular to social, to understand the processes of aging and the factors that determine who ages “well” and who is susceptible to age-related disease and disability. Research is also ongoing to identify the interactions among genetic, environmental, lifestyle, behavioral, and social factors and their influence on the initiation and progression of age-related diseases and degenerative conditions.


Historical Perspectives

PERSONALITY ASSESSMENT

Personality assessment has come to rival intelligence testing as a task performed by psychologists. However, while most psychologists would agree that an intelligence test is generally the best way to measure intelligence, no such consensus exists for personality evaluation. In long-term perspective, it would appear that two major philosophies and perhaps three assessment methods have emerged. The two philosophies can be traced back to Allport’s (1937) distinction between nomothetic versus idiographic methodologies and Meehl’s (1954) distinction between clinical and statistical or actuarial prediction. In essence, some psychologists feel that personality assessments are best accomplished when they are highly individualized, while others have a preference for quantitative procedures based on group norms. The phrase “seer versus sign” has been used to epitomize this dispute. The three methods referred to are the interview, and projective and objective tests. Obviously, the first way psychologists and their predecessors found out about people was to talk to them, giving the interview historical precedence. But following a period when the use of the interview was eschewed by many psychologists, it has made a return. It would appear that the field is in a historical spiral, with various methods leaving and returning at different levels.

The interview began as a relatively unstructured conversation with the patient and perhaps an informant, with varying goals, including obtaining a history, assessing personality structure and dynamics, establishing a diagnosis, and many other matters. Numerous publications have been written about interviewing (e.g., Menninger, 1952 ), but in general they provided outlines and general guidelines as to what should be accomplished by the interview. However, model interviews were not provided. With or without this guidance, the interview was viewed by many as a subjective, unreliable procedure that could not be sufficiently validated. For example, the unreliability of psychiatric diagnosis based on studies of multiple interviewers had been well established ( Zubin, 1967 ). More recently, however, several structured psychiatric interviews have appeared in which the specific content, if not specific items, has been presented, and for which very adequate reliability has been established. There are by now several such interviews available including the Schedule for Affective Disorders and Schizophrenia (SADS) ( Spitzer & Endicott, 1977 ), the Renard Diagnostic Interview ( Helzer, Robins, Croughan, & Weiner, 1981 ), and the Structured Clinical Interview for DSM-III, DSM-III-R, or DSM-IV (SCID or SCID-R) ( Spitzer & Williams, 1983 ) (now updated for DSM-IV). These interviews have been established in conjunction with objective diagnostic criteria including DSM-III itself, the Research Diagnostic Criteria ( Spitzer, Endicott, & Robins, 1977 ), and the Feighner Criteria ( Feighner, et al., 1972 ). These new procedures have apparently ushered in a “comeback” of the interview, and many psychiatrists and psychologists now prefer to use these procedures rather than either the objective- or projective-type psychological test.

Those advocating use of structured interviews point to the fact that in psychiatry, at least, tests must ultimately be validated against judgments made by psychiatrists. These judgments are generally based on interviews and observation, since there really are no biological or other objective markers of most forms of psychopathology. If that is indeed the case, there seems little point in administering elaborate and often lengthy tests when one can just as well use the criterion measure itself, the interview, rather than the test. There is no way that a test can be more valid than an interview if an interview is the validating criterion. Structured interviews have made a major impact on the scientific literature in psychopathology, and it is rare to find a recently written research report in which the diagnoses were not established by one of them. It would appear that we have come full cycle regarding this matter, and until objective markers of various forms of psychopathology are discovered, we will be relying primarily on the structured interviews for our diagnostic assessments.

Interviews such as the SCID or the Diagnostic Interview Schedule (DIS) type are relatively lengthy and comprehensive, but there are now several briefer, more specific interview or interview-like procedures. Within psychiatry, perhaps the most well-known procedure is the Brief Psychiatric Rating Scale (BPRS) ( Overall & Gorham, 1962 ). The BPRS is a brief, structured, repeatable interview that has essentially become the standard instrument for assessment of change in patients, usually as a function of taking some form of psychotropic medication. In the specific area of depression, the Hamilton Depression Scale ( Hamilton, 1960 ) plays a similar role. There are also several widely used interviews for patients with dementia, which generally combine a brief mental-status examination and some form of functional assessment, with particular reference to activities of daily living. The most popular of these scales are the Mini-Mental Status Examination of Folstein, Folstein, and McHugh (1975) and the Dementia Scale of Blessed, Tomlinson, and Roth (1968) . Extensive validation studies have been conducted with these instruments, perhaps the most well-known study having to do with the correlation between scores on the Blessed, Tomlinson, and Roth scale used in patients while they are living and the senile plaque count determined on autopsy in patients with dementia. The obtained correlation of .7 quite impressively suggested that the scale was a valid one for detection of dementia. In addition to these interviews and rating scales, numerous methods have been developed by nurses and psychiatric aides for assessment of psychopathology based on direct observation of ward behavior ( Raskin, 1982 ). The most widely used of these rating scales are the Nurses’ Observation Scale for Inpatient Evaluation (NOSIE-30) ( Honigfeld & Klett, 1965 ) and the Ward Behavior Inventory ( Burdock, Hardesty, Hakerem, Zubin, & Beck, 1968 ).
These scales assess such behaviors as cooperativeness, appearance, communication, aggressive episodes, and related behaviors, and are based on direct observation rather than reference to medical records or the report of others. Scales of this type supplement the interview with information concerning social competence and capacity to carry out functional activities of daily living.

Again taking a long-term historical view, it is our impression that after many years of neglect by the field, the interview has made a successful return to the arena of psychological assessment, but the interviews now used are quite different from the loosely organized, “freewheeling,” conversation-like interviews of the past ( Hersen & Van Hasselt, 1998 ). First, their organization tends to be structured, and the interviewer is required to obtain certain items of information. It is generally felt that formulation of specifically-worded questions is counterproductive; rather, the interviewer, who should be an experienced clinician trained in the use of the procedure, should be able to formulate questions that will elicit the required information. Second, the interview procedure must meet psychometric standards of validity and reliability. Finally, while structured interviews tend to be atheoretical in orientation, they are based on contemporary scientific knowledge of psychopathology. Thus, for example, the information needed to establish a differential diagnosis within the general classification of mood disorders is derived from the scientific literature on depression and related mood disorders.

The rise of the interview appears to have occurred in parallel with the decline of projective techniques . Those of us in a chronological category that may be roughly described as middle-age may recall that our graduate training in clinical psychology probably included extensive course work and practicum experience involving the various projective techniques. Most clinical psychologists would probably agree that even though projective techniques are still used to some extent, the atmosphere of ferment and excitement concerning these procedures that existed during the 1940s and 1950s no longer seems to exist. Even though the Rorschach technique and Thematic Apperception Test (TAT) were the major procedures used during that era, a variety of other tests emerged quite rapidly: the projective use of human-figure drawings ( Machover, 1949 ), the Szondi Test ( Szondi, 1952 ), the Make-A-Picture-Story (MAPS) Test ( Shneidman, 1952 ), the Four-Picture Test ( VanLennep, 1951 ), the Sentence Completion Tests (e.g., Rohde, 1957 ), and the Holtzman Inkblot Test ( Holtzman, 1958 ). The exciting work of Murray and his collaborators reported on in Explorations in Personality ( Murray, 1938 ) had a major impact on the field and stimulated extensive utilization of the TAT. It would probably be fair to say that the sole survivor of this active movement is the Rorschach test. Many clinicians continue to use the Rorschach test, and the work of Exner and his collaborators has lent it increasing scientific respectability (see Chapter 17 in this volume).

There are undoubtedly many reasons for the decline in utilization of projective techniques, but in our view they can be summarized by the following points:

Increasing scientific sophistication created an atmosphere of skepticism concerning these instruments. Their validity and reliability were called into question by numerous studies (e.g., Swensen, 1957, 1968; Zubin, 1967), and a substantial segment of the professional community felt that the claims made for these procedures could not be substantiated.

Developments in alternative procedures, notably the Minnesota Multiphasic Personality Inventory (MMPI) and other objective tests, convinced many clinicians that the information previously gained from projective tests could be gained more efficiently and less expensively with objective methods. In particular, the voluminous MMPI research literature has demonstrated its usefulness in an extremely wide variety of clinical and research settings. When the MMPI and related objective techniques were pitted against projective techniques during the days of the “seer versus sign” controversy, it was generally demonstrated that sign was as good as or better than seer in most of the studies accomplished ( Meehl, 1954 ).

In general, the projective techniques are not atheoretical and, in fact, are generally viewed as being associated with one or another branch of psychoanalytic theory. While psychoanalysis remains a strong and vigorous movement within psychology, there are numerous alternative theoretical systems at large, notably behaviorally and biologically oriented systems. As implied in the section of this chapter covering behavioral assessment, behaviorally oriented psychologists pose theoretical objections to projective techniques and make little use of them in their practices. Similarly, projective techniques tend not to receive high levels of acceptance in biologically-oriented psychiatry departments. In effect, then, utilization of projective techniques declined for scientific, practical, and philosophical reasons. However, the Rorschach test in particular continues to be productively used, primarily by psychodynamically oriented clinicians.

The early history of objective personality tests has been traced by Cronbach (1949, 1960 ). The beginnings apparently go back to Sir Francis Galton, who devised personality questionnaires during the latter part of the 19th century. We will not repeat that history here, but rather will focus on those procedures that survived into the contemporary era. In our view, there have been three such major survivors: a series of tests developed by Guilford and collaborators ( Guilford & Zimmerman, 1949 ), a similar series developed by Cattell and collaborators ( Cattell, Eber, & Tatsuoka, 1970 ), and the MMPI. In general, but certainly not in all cases, the Guilford and Cattell procedures are used for individuals functioning within the normal range, while the MMPI is more widely used in clinical populations. Thus, for example, Cattell’s 16PF test may be used to screen job applicants, while the MMPI may be more typically used in psychiatric health-care facilities. Furthermore, the Guilford and Cattell tests are based on factor analysis and are trait-oriented, while the MMPI in its standard form does not make use of factor analytically derived scales and is more oriented toward psychiatric classification. Thus, the Guilford and Cattell scales contain measures of such traits as dominance or sociability, while most of the MMPI scales are named after psychiatric classifications such as paranoia or hypochondriasis.

Currently, most psychologists use one or more of these objective tests rather than interviews or projective tests in screening situations. For example, many thousands of patients admitted to psychiatric facilities operated by the Veterans Administration take the MMPI shortly after admission, while applicants for prison-guard jobs in the state of Pennsylvania take the Cattell 16PF. However, the MMPI in particular is commonly used as more than a screening instrument. It is frequently used as a part of an extensive diagnostic evaluation, as a method of evaluating treatment, and in numerous research applications. There is little question that it is the most widely used and extensively studied procedure in the objective personality-test area. Even though the 566 true-or-false items have remained the same since the initial development of the instrument, the test’s applications in clinical interpretation have evolved dramatically over the years. We have gone from perhaps an overly naive dependence on single-scale evaluations and overly literal interpretation of the names of the scales (many of which are archaic psychiatric terms) to a sophisticated configural interpretation of profiles, much of which is based on empirical research ( Gilberstadt & Duker, 1965; Marks, Seeman, & Haller, 1974 ). Correspondingly, the methods of administering, scoring, and interpreting the MMPI have kept pace with technological and scientific advances in the behavioral sciences. From beginning with sorting cards into piles, hand scoring, and subjective interpretation, the MMPI has gone to computerized administration and scoring, interpretation based, at least to some extent, on empirical findings, and computerized interpretation. As is well known, there are several companies that will provide computerized scoring and interpretations of the MMPI.

Since the appearance of the earlier editions of this handbook, there have been two major developments in the field of objective personality assessment. First, Millon has produced a new series of tests called the Millon Clinical Multiaxial Inventory (Versions I and II), the Millon Adolescent Personality Inventory, and the Millon Behavioral Health Inventory ( Millon, 1982, 1985 ). Second, the MMPI has been completely revised and restandardized, and is now known as the MMPI-2. Since the appearance of the second edition of this handbook, use of the MMPI-2 has been widely adopted. Chapter 16 in this volume describes these new developments in detail.

Even though we should anticipate continued spiraling of trends in personality assessment, it would appear that we have passed an era of projective techniques and are now living in a time of objective assessment, with an increasing interest in the structured interview. There also appears to be increasing concern with the scientific status of our assessment procedures. In recent years, there has been particular concern about reliability of diagnosis, especially since distressing findings appeared in the literature suggesting that psychiatric diagnoses were being made quite unreliably ( Zubin, 1967 ). The issue of validity in personality assessment remains a difficult one for a number of reasons. First, if by personality assessment we mean prediction or classification of some psychiatric diagnostic category, we have the problem of there being essentially no known objective markers for the major forms of psychopathology. Therefore, we are left essentially with psychiatrists’ judgments. The DSM system has greatly improved this situation by providing objective criteria for the various mental disorders, but the capacity of such instruments as the MMPI or Rorschach test to predict DSM diagnoses has not yet been evaluated and remains a research question for the future. Some scholars, however, even question the usefulness of taking that research course rather than developing increasingly reliable and valid structured interviews ( Zubin, 1984 ). Similarly, there have been many reports of the failure of objective tests to predict such matters as success in an occupation or trustworthiness with regard to handling a weapon. For example, objective tests are no longer used to screen astronauts, since they were not successful in predicting who would be successful or unsuccessful ( Cordes, 1983 ). 
There does, in fact, appear to be a movement within the general public and the profession toward discontinuation of use of personality-assessment procedures for decision-making in employment situations. We would note, as another possibly significant trend, a movement toward direct observation of behavior in the form of behavioral assessment, as in the case of the development of the Autism Diagnostic Observation Schedule (ADOS) ( Lord et al., 1989 ). The Zeitgeist definitely is in opposition to procedures in which the intent is disguised. Burdock and Zubin (1985), for example, argue that, “nothing has as yet replaced behavior for evaluation of mental patients.”


Funding

I would like to thank B. Dan for fruitful discussions about the manuscript, and T. D'Angelo, M. Dufief, E. Toussaint, E. Hortmanns, and M. Petieau for expert technical assistance. This work was funded by the Belgian Federal Science Policy Office, the European Space Agency (AO-2004,118), the Belgian National Fund for Scientific Research (FNRS), the research funds of the Université Libre de Bruxelles and of the Université de Mons (Belgium), the FEDER support (BIOFACT), the MINDWALKER project (FP7) supported by the European Commission, the Fonds G. Leibu, and the NeuroAtt BIOWIN project supported by the Walloon Region.


Psychology and policing: a dynamic partnership

In 1982, a small group of psychologists working in police agencies found an APA home in Div. 18 (Psychologists in Public Service). At that time, law enforcement resisted psychology. So, it was extremely gratifying when, 15 years later, police chiefs met with APA leadership seeking input on managing pressing problems that affect the quality of American policing.

Indeed, psychology has made significant inroads into improving the functioning of the tradition-clad occupations that are responsible for public safety and law enforcement throughout the country. The work of the five psychologists profiled in this issue represents the breadth of services that are available to police and public safety organizations. A survey by VerHelst, Delprino and O'Regan (2002) confirms that police use of psychological services continues to grow. Its findings support those of a national survey (Scrivner, 1994), which showed the impact that psychology has made on policing.

Transforming events

Police departments' acceptance of psychology reflects a major cultural shift in policing and allows other transforming events to occur. For example, psychology's resources could be applied to addressing significant national policy issues, such as the interactions between police and citizens in their communities. Consequently, the growing number of psychologists working with law enforcement argues for psychology to have an even greater influence on public policy and the delivery of police services in this country. The work of the APA Committee on Urban Initiatives (CUI) is one step in this direction. In 1998, CUI incorporated community policing into the committee's portfolio to explore the potential for this innovative police reform to improve relationships between the police and urban citizens.

Community policing, cited as one factor responsible for the dramatic decrease in crime, is based on establishing effective problem-solving partnerships with the community to prevent crime and disorder while improving the quality of life. As such, community policing promotes behavioral change. Therefore, this major criminal justice initiative has a psychological component.

CUI initiated its work by hosting a series of roundtable discussions with police chiefs in conjunction with APA's Annual Conventions. For three consecutive years, CUI met with local police chiefs and the psychologists who worked with them to determine where we could forge stronger alliances.

The dialogue covered a wide range of topics that go beyond delivery of traditional mental health services. Some examples include: identifying the types of assistance needed to end racial profiling, intervening in police brutality, strengthening police integrity and developing greater understanding of police officer fear. Other topics involved examining alternatives to arresting the homeless, responding to hate crimes, and mediation and anger-management training for front-line officers.

The roundtables also addressed psychology's research expertise. These discussions generated research ideas for studying the impact on police officers of observing violence, how violence goes home with officers to become domestic violence, and using the research literature on self-fulfilling prophecies and changing stereotypes to examine ethnic profiling. The CUI initiative came full circle at APA's 2001 Annual Convention when the San Francisco police chief and the sheriff of Los Angeles County participated in a workshop on racial profiling. They discussed their efforts to use community policing to prevent racial profiling.

Maintaining the momentum

These initiatives show steady growth in the partnership between police and psychology. However, we still have more to do to ensure that talk becomes action and influences policies on public safety. Psychology, with a knowledge base that is relevant to so many social issues and the tradition of seeking research-based solutions, is uniquely positioned to maintain this momentum and help to create better lives for people.

The events of Sept. 11 have broadened psychology's role in helping first responders and victims of this tragedy. However, they also create new roles for psychology as police increase their participation in homeland security. Psychology can be an important partner in helping police balance the delivery of law enforcement services to all citizens while facing the challenge of maintaining readiness to respond to public safety alerts.


Interactions

What researchers do know is that the interaction between heredity and environment is often the most important factor of all. Kevin Davies of PBS's Nova described one fascinating example of this phenomenon.

Perfect pitch is the ability to detect the pitch of a musical tone without any reference. Researchers have found that this ability tends to run in families and believe that it might be tied to a single gene. However, they've also discovered that possessing the gene alone is not enough to develop this ability. Instead, musical training during early childhood is necessary to allow this inherited ability to manifest itself.  

Height is another example of a trait influenced by the interaction of nature and nurture. A child might come from a family where everyone is tall, and he may have inherited genes for height. However, if he grows up in a deprived environment where he does not receive proper nourishment, he might never attain the height he could have reached had he grown up in a healthier environment.


1. Introduction

Machine learning and neuroscience speak different languages today. Brain science has discovered a dazzling array of brain areas (Solari and Stoner, 2011), cell types, molecules, cellular states, and mechanisms for computation and information storage. Machine learning, in contrast, has largely focused on instantiations of a single principle: function optimization. It has found that simple optimization objectives, like minimizing classification error, can lead to the formation of rich internal representations and powerful algorithmic capabilities in multilayer and recurrent networks (LeCun et al., 2015; Schmidhuber, 2015). Here we seek to connect these perspectives.

The artificial neural networks now prominent in machine learning were, of course, originally inspired by neuroscience (McCulloch and Pitts, 1943). While neuroscience has continued to play a role (Cox and Dean, 2014), many of the major developments were guided by insights into the mathematics of efficient optimization, rather than neuroscientific findings (Sutskever and Martens, 2013). The field has advanced from simple linear systems (Minsky and Papert, 1972), to nonlinear networks (Haykin, 1994), to deep and recurrent networks (LeCun et al., 2015; Schmidhuber, 2015). Backpropagation of error (Werbos, 1974, 1982; Rumelhart et al., 1986) enabled neural networks to be trained efficiently, by providing an efficient means to compute the gradient with respect to the weights of a multi-layer network. Methods of training have improved to include momentum terms, better weight initializations, conjugate gradients and so forth, evolving to the current breed of networks optimized using batch-wise stochastic gradient descent. These developments have little obvious connection to neuroscience.
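To make the gradient computation concrete, here is a minimal illustrative sketch (my own, not from the cited works) of backpropagation in a two-layer network: the forward pass computes activations, and the backward pass applies the chain rule once per layer to obtain the weight gradients.

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = 0.5 * rng.standard_normal((4, 3))   # input -> hidden weights
W2 = 0.5 * rng.standard_normal((1, 4))   # hidden -> output weights
x = np.array([1.0, -0.5, 0.2])           # a single training input
y = np.array([0.7])                      # its target output
lr = 0.1

losses = []
for _ in range(50):
    # forward pass
    h = np.tanh(W1 @ x)                  # hidden activations
    yhat = W2 @ h                        # linear output
    e = yhat - y                         # dL/dyhat for L = 0.5 * ||yhat - y||^2

    # backward pass: the chain rule, one layer at a time
    dW2 = np.outer(e, h)                 # gradient at the output layer
    dh = (W2.T @ e) * (1.0 - h ** 2)     # error pushed back through tanh
    dW1 = np.outer(dh, x)                # gradient at the hidden layer

    W2 -= lr * dW2                       # gradient descent step
    W1 -= lr * dW1
    losses.append(float(0.5 * e @ e))
```

The key efficiency point from the text is visible here: the backward pass reuses the forward activations (`h`) and costs roughly the same as the forward pass, rather than requiring a separate perturbation per weight.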

We will argue here, however, that neuroscience and machine learning are again ripe for convergence. Three aspects of machine learning are particularly important in the context of this paper. First, machine learning has focused on the optimization of cost functions (Figure 1A).

Figure 1. Putative differences between conventional and brain-like neural network designs. (A) In conventional deep learning, supervised training is based on externally-supplied, labeled data. (B) In the brain, supervised training of networks can still occur via gradient descent on an error signal, but this error signal must arise from internally generated cost functions. These cost functions are themselves computed by neural modules specified by both genetics and learning. Internally generated cost functions create heuristics that are used to bootstrap more complex learning. For example, an area which recognizes faces might first be trained to detect faces using simple heuristics, like the presence of two dots above a line, and then further trained to discriminate salient facial expressions using representations arising from unsupervised learning and error signals from other brain areas related to social reward processing. (C) Internally generated cost functions and error-driven training of cortical deep networks form part of a larger architecture containing several specialized systems. Although the trainable cortical areas are schematized as feedforward neural networks here, LSTMs or other types of recurrent networks may be a more accurate analogy, and many neuronal and network properties such as spiking, dendritic computation, neuromodulation, adaptation and homeostatic plasticity, timing-dependent plasticity, direct electrical connections, transient synaptic dynamics, excitatory/inhibitory balance, spontaneous oscillatory activity, axonal conduction delays (Izhikevich, 2006) and others, will influence what and how such networks learn.

Second, recent work in machine learning has started to introduce complex cost functions, those that are not uniform across layers and time, and those that arise from interactions between different parts of a network. For example, introducing the objective of temporal coherence for lower layers (non-uniform cost function over space) improves feature learning (Sermanet and Kavukcuoglu, 2013), cost function schedules (non-uniform cost function over time) improve 1 generalization (Saxe et al., 2013 Goodfellow et al., 2014b Gül๾hre and Bengio, 2016) and adversarial networks𠅊n example of a cost function arising from internal interactions𠅊llow gradient-based training of generative models (Goodfellow et al., 2014a) 2 . Networks that are easier to train are being used to provide “hints” to help bootstrap the training of more powerful networks (Romero et al., 2014).

Third, machine learning has also begun to diversify the architectures that are subject to optimization. It has introduced simple memory cells with multiple persistent states (Hochreiter and Schmidhuber, 1997 Chung et al., 2014), more complex elementary units such as �psules” and other structures (Delalleau and Bengio, 2011 Hinton et al., 2011 Tang et al., 2012 Livni et al., 2013), content addressable (Graves et al., 2014 Weston et al., 2014) and location addressable memories (Graves et al., 2014), as well as pointers (Kurach et al., 2015) and hard-coded arithmetic operations (Neelakantan et al., 2015).

These three ideas have, so far, not received much attention in neuroscience. We thus formulate these ideas as three hypotheses about the brain, examine evidence for them, and sketch how experiments could test them. But first, let us state the hypotheses more precisely.

1.1. Hypothesis 1 – The Brain Optimizes Cost Functions

The central hypothesis for linking the two fields is that biological systems, like many machine-learning systems, are able to optimize cost functions. The idea of cost functions means that neurons in a brain area can somehow change their properties, e.g., the properties of their synapses, so that they get better at doing whatever the cost function defines as their role. Human behavior sometimes approaches optimality in a domain, e.g., during movement (Körding, 2007), which suggests that the brain may have learned optimal strategies. Subjects minimize energy consumption of their movement system (Taylor and Faisal, 2011), and minimize risk and damage to their body, while maximizing financial and movement gains. Computationally, we now know that optimization of trajectories gives rise to elegant solutions for very complex motor tasks (Harris and Wolpert, 1998 Todorov and Jordan, 2002 Mordatch et al., 2012). We suggest that cost function optimization occurs much more generally in shaping the internal representations and processes used by the brain. Importantly, we also suggest that this requires the brain to have mechanisms for efficient credit assignment in multilayer and recurrent networks.

1.2. Hypothesis 2 – Cost Functions Are Diverse across Areas and Change over Development

A second realization is that cost functions need not be global. Neurons in different brain areas may optimize different things, e.g., the mean squared error of movements, surprise in a visual stimulus, or the allocation of attention. Importantly, such a cost function could be locally generated. For example, neurons could locally evaluate the quality of their statistical model of their inputs (Figure 1B). Alternatively, cost functions for one area could be generated by another area. Moreover, cost functions may change over time, e.g., guiding young humans to understanding simple visual contrasts early on, and faces a bit later 3 . This could allow the developing brain to bootstrap more complex knowledge based on simpler knowledge. Cost functions in the brain are likely to be complex and to be arranged to vary across areas and over development.

1.3. Hypothesis 3 – Specialized Systems Allow Efficient Solution of Key Computational Problems

A third realization is that structure matters. The patterns of information flow seem fundamentally different across brain areas, suggesting that they solve distinct computational problems. Some brain areas are highly recurrent, perhaps making them predestined for short-term memory storage (Wang, 2012). Some areas contain cell types that can switch between qualitatively different states of activation, such as a persistent firing mode vs. a transient firing mode, in response to particular neurotransmitters (Hasselmo, 2006). Other areas, like the thalamus appear to have the information from other areas flowing through them, perhaps allowing them to determine information routing (Sherman, 2005). Areas like the basal ganglia are involved in reinforcement learning and gating of discrete decisions (Doya, 1999 Sejnowski and Poizner, 2014). As every programmer knows, specialized algorithms matter for efficient solutions to computational problems, and the brain is likely to make good use of such specialization (Figure 1C).

These ideas are inspired by recent advances in machine learning, but we also propose that the brain has major differences from any of today's machine learning techniques. In particular, the world gives us a relatively limited amount of information that we could use for supervised learning (Fodor and Crowther, 2002). There is a huge amount of information available for unsupervised learning, but there is no reason to assume that a generic unsupervised algorithm, no matter how powerful, would learn the precise things that humans need to know, in the order that they need to know it. The evolutionary challenge of making unsupervised learning solve the “right” problems is, therefore, to find a sequence of cost functions that will deterministically build circuits and behaviors according to prescribed developmental stages, so that in the end a relatively small amount of information suffices to produce the right behavior. For example, a developing duck imprints (Tinbergen, 1965) a template of its parent, and then uses that template to generate goal-targets that help it develop other skills like foraging.

Generalizing from this and from other studies (Minsky, 1977 Ullman et al., 2012), we propose that many of the brain's cost functions arise from such an internal bootstrapping process. Indeed, we propose that biological development and reinforcement learning can, in effect, program the emergence of a sequence of cost functions that precisely anticipates the future needs faced by the brain's internal subsystems, as well as by the organism as a whole. This type of developmentally programmed bootstrapping generates an internal infrastructure of cost functions which is diverse and complex, while simplifying the learning problems faced by the brain's internal processes. Beyond simple tasks like familial imprinting, this type of bootstrapping could extend to higher cognition, e.g., internally generated cost functions could train a developing brain to properly access its memory or to organize its actions in ways that will prove to be useful later on. The potential bootstrapping mechanisms that we will consider operate in the context of unsupervised and reinforcement learning, and go well beyond the types of curriculum learning ideas used in today's machine learning (Bengio et al., 2009).

In the rest of this paper, we will elaborate on these hypotheses. First, we will argue that both local and multi-layer optimization is, perhaps surprisingly, compatible with what we know about the brain. Second, we will argue that cost functions differ across brain areas and change over time and describe how cost functions interacting in an orchestrated way could allow bootstrapping of complex function. Third, we will list a broad set of specialized problems that need to be solved by neural computation, and the brain areas that have structure that seems to be matched to a particular computational problem. We then discuss some implications of the above hypotheses for research approaches in neuroscience and machine learning, and sketch a set of experiments to test these hypotheses. Finally, we discuss this architecture from the perspective of evolution.


Child and Adolescent Services Multicultural Clinical Training Program

The UCSF Child and Adolescent Services Multicultural Clinical Training Program (MCTP) at Zuckerberg San Francisco General Hospital and Trauma Center offers an American Psychological Association (APA)-accredited, one-year child clinical psychology internship based on the scholar-practitioner model. Thus, our program is grounded in serving the needs of the local community with a commitment to research that is taught and valued, particularly, though not exclusively, in the service of clinical practice.

We hold an ideal of professional excellence grounded in theory and empirical research, informed by experiential knowledge, and motivated by a commitment to social justice and ethical conduct. At Zuckerberg San Francisco General, we encourage students to become not just consumers of knowledge, but also agents of change who contribute to the advancement of individuals, communities, organizations, and society. The MCTP provides specialized training and leadership in multicultural psychology and works to break down barriers that families often encounter in their attempts to access culturally appropriate, high-quality, evidence-based care.

The internship program is embedded in the Division of Infant, Child and Adolescent Psychiatry in UCSF’s Department of Psychiatry and Behavioral Sciences. Zuckerberg San Francisco General is a Level 1 Trauma Center and public service hospital committed to serving low-income and culturally diverse populations and those from marginalized communities. Clinical services are linked to the San Francisco Department of Public Health's Community Behavioral Health System. The MCTP was accredited by the American Psychological Association (APA) in 2007 and reaccredited in 2013. The APA Commission on Accreditation completed a site visit in August 2019 and our accreditation is currently under review. The MCTP continues to have full APA accreditation.

The MCTP is designed to train clinical psychologists who are committed to serving children, youth, and families from low-income and diverse ethnic and cultural groups. Over the last several years, 89% of our graduates have obtained positions in academic health centers or hospital centers providing care to underserved children and families.

Training is intended to provide experience across the entire developmental spectrum of 0-24 years of age and provides specialized training in:




Funding

I would like to thank B. Dan for fruitful discussion about the manuscript, and T. D'Angelo, M. Dufief, E. Toussaint, E. Hortmanns, and M. Petieau for expert technical assistance. This work was funded by the Belgian Federal Science Policy Office, the European Space Agency (AO-2004,118), the Belgian National Fund for Scientific Research (FNRS), the research funds of the Université Libre de Bruxelles and of the Université de Mons (Belgium), the FEDER support (BIOFACT), the MINDWALKER project (FP7) supported by the European Commission, the Fonds G. Leibu, and the NeuroAtt BIOWIN project supported by the Walloon Region.


Understanding the Dynamics of the Aging Process

Aging is associated with changes in dynamic biological, physiological, environmental, psychological, behavioral, and social processes. Some age-related changes are benign, such as graying hair. Others result in declines in function of the senses and activities of daily life and increased susceptibility to and frequency of disease, frailty, or disability. In fact, advancing age is the major risk factor for a number of chronic diseases in humans.

Studies from the basic biology of aging using laboratory animals — and now extended to human populations — have led to the emergence of theories to explain aging. While there is no single “key” to explain aging, these studies have demonstrated that the rate of aging can be slowed, suggesting that targeting aging will coincidentally slow the appearance and/or reduce the burden of numerous diseases and increase healthspan (the portion of life spent in good health).

To develop new interventions for the prevention, early detection, diagnosis, and treatment of aging-related diseases, disorders, and disabilities, we must first understand their causes and the factors that place people at increased risk for their initiation and progression. NIA-supported researchers are engaged in basic science at all levels of analysis, from molecular to social, to understand the processes of aging and the factors that determine who ages “well” and who is susceptible to age-related disease and disability. Research is also ongoing to identify the interactions among genetic, environmental, lifestyle, behavioral, and social factors and their influence on the initiation and progression of age-related diseases and degenerative conditions.


1. Introduction

Machine learning and neuroscience speak different languages today. Brain science has discovered a dazzling array of brain areas (Solari and Stoner, 2011), cell types, molecules, cellular states, and mechanisms for computation and information storage. Machine learning, in contrast, has largely focused on instantiations of a single principle: function optimization. It has found that simple optimization objectives, like minimizing classification error, can lead to the formation of rich internal representations and powerful algorithmic capabilities in multilayer and recurrent networks (LeCun et al., 2015; Schmidhuber, 2015). Here we seek to connect these perspectives.

The artificial neural networks now prominent in machine learning were, of course, originally inspired by neuroscience (McCulloch and Pitts, 1943). While neuroscience has continued to play a role (Cox and Dean, 2014), many of the major developments were guided by insights into the mathematics of efficient optimization, rather than by neuroscientific findings (Sutskever and Martens, 2013). The field has advanced from simple linear systems (Minsky and Papert, 1972), to nonlinear networks (Haykin, 1994), to deep and recurrent networks (LeCun et al., 2015; Schmidhuber, 2015). Backpropagation of error (Werbos, 1974, 1982; Rumelhart et al., 1986) enabled neural networks to be trained efficiently, by providing a tractable means to compute the gradient with respect to the weights of a multi-layer network. Methods of training have improved to include momentum terms, better weight initializations, conjugate gradients, and so forth, evolving to the current breed of networks optimized using batch-wise stochastic gradient descent. These developments have little obvious connection to neuroscience.
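The momentum term mentioned above can be made concrete with a minimal sketch (a hand-rolled update on a toy quadratic cost; the function name, the cost, and all hyperparameter values are illustrative assumptions, not taken from any cited work):

```python
import numpy as np

def sgd_momentum(grad, w0, lr=0.1, beta=0.9, steps=100):
    """Gradient descent with a momentum term: the velocity `v` accumulates
    an exponentially decaying sum of past gradients, smoothing the descent
    direction across steps."""
    w = np.array(w0, dtype=float)
    v = np.zeros_like(w)
    for _ in range(steps):
        v = beta * v - lr * grad(w)  # accumulate velocity
        w = w + v                    # move along the smoothed direction
    return w

# Toy cost f(w) = 0.5 * ||w||^2, whose gradient is simply w;
# the minimizer is the origin.
w_final = sgd_momentum(lambda w: w, w0=[4.0, -2.0])
```

With `beta = 0` this reduces to plain gradient descent; a larger `beta` damps oscillations along steep directions, which is one reason momentum became a standard ingredient of the training recipes listed above.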

We will argue here, however, that neuroscience and machine learning are again ripe for convergence. Three aspects of machine learning are particularly important in the context of this paper. First, machine learning has focused on the optimization of cost functions (Figure 1A).

Figure 1. Putative differences between conventional and brain-like neural network designs. (A) In conventional deep learning, supervised training is based on externally-supplied, labeled data. (B) In the brain, supervised training of networks can still occur via gradient descent on an error signal, but this error signal must arise from internally generated cost functions. These cost functions are themselves computed by neural modules specified by both genetics and learning. Internally generated cost functions create heuristics that are used to bootstrap more complex learning. For example, an area which recognizes faces might first be trained to detect faces using simple heuristics, like the presence of two dots above a line, and then further trained to discriminate salient facial expressions using representations arising from unsupervised learning and error signals from other brain areas related to social reward processing. (C) Internally generated cost functions and error-driven training of cortical deep networks form part of a larger architecture containing several specialized systems. Although the trainable cortical areas are schematized as feedforward neural networks here, LSTMs or other types of recurrent networks may be a more accurate analogy, and many neuronal and network properties such as spiking, dendritic computation, neuromodulation, adaptation and homeostatic plasticity, timing-dependent plasticity, direct electrical connections, transient synaptic dynamics, excitatory/inhibitory balance, spontaneous oscillatory activity, axonal conduction delays (Izhikevich, 2006) and others, will influence what and how such networks learn.

Second, recent work in machine learning has started to introduce complex cost functions: those that are not uniform across layers and time, and those that arise from interactions between different parts of a network. For example, introducing the objective of temporal coherence for lower layers (a non-uniform cost function over space) improves feature learning (Sermanet and Kavukcuoglu, 2013), cost function schedules (non-uniform cost functions over time) improve generalization (Saxe et al., 2013; Goodfellow et al., 2014b; Gülçehre and Bengio, 2016), and adversarial networks, an example of a cost function arising from internal interactions, allow gradient-based training of generative models (Goodfellow et al., 2014a). Networks that are easier to train are being used to provide “hints” to help bootstrap the training of more powerful networks (Romero et al., 2014).

Third, machine learning has also begun to diversify the architectures that are subject to optimization. It has introduced simple memory cells with multiple persistent states (Hochreiter and Schmidhuber, 1997; Chung et al., 2014), more complex elementary units such as “capsules” and other structures (Delalleau and Bengio, 2011; Hinton et al., 2011; Tang et al., 2012; Livni et al., 2013), content-addressable (Graves et al., 2014; Weston et al., 2014) and location-addressable memories (Graves et al., 2014), as well as pointers (Kurach et al., 2015) and hard-coded arithmetic operations (Neelakantan et al., 2015).
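Content-based addressing of the kind used in Graves et al. (2014) can be sketched in a few lines: a read weighting is computed from the similarity between a query key and each memory row. The memory contents, key, and sharpness value below are made-up illustrations:

```python
import numpy as np

def content_address(memory, key, beta=10.0):
    """Soft content-based read: score each memory row by cosine similarity
    to the query key, sharpen with `beta`, normalize with a softmax, and
    return the weighted mixture of rows."""
    sims = memory @ key / (
        np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    weights = np.exp(beta * sims)
    weights /= weights.sum()          # softmax over memory rows
    return weights @ memory           # blended read-out

memory = np.eye(3)                    # three orthogonal stored patterns
read = content_address(memory, np.array([0.9, 0.1, 0.0]))
# `read` is dominated by the first stored row, the best match to the key.
```

Because the read is a differentiable blend rather than a hard lookup, the whole memory access can sit inside a network trained by gradient descent.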

These three ideas have, so far, not received much attention in neuroscience. We thus formulate these ideas as three hypotheses about the brain, examine evidence for them, and sketch how experiments could test them. But first, let us state the hypotheses more precisely.

1.1. Hypothesis 1 – The Brain Optimizes Cost Functions

The central hypothesis for linking the two fields is that biological systems, like many machine-learning systems, are able to optimize cost functions. The idea of cost functions means that neurons in a brain area can somehow change their properties, e.g., the properties of their synapses, so that they get better at doing whatever the cost function defines as their role. Human behavior sometimes approaches optimality in a domain, e.g., during movement (Körding, 2007), which suggests that the brain may have learned optimal strategies. Subjects minimize energy consumption of their movement system (Taylor and Faisal, 2011), and minimize risk and damage to their body, while maximizing financial and movement gains. Computationally, we now know that optimization of trajectories gives rise to elegant solutions for very complex motor tasks (Harris and Wolpert, 1998; Todorov and Jordan, 2002; Mordatch et al., 2012). We suggest that cost function optimization occurs much more generally in shaping the internal representations and processes used by the brain. Importantly, we also suggest that this requires the brain to have mechanisms for efficient credit assignment in multilayer and recurrent networks.
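To make "optimization of trajectories" concrete, here is a minimal sketch that minimizes a smoothness cost (the sum of squared discrete accelerations) over a one-dimensional movement with clamped endpoints. The cost and every number below are illustrative stand-ins for the richer costs used in the cited work:

```python
import numpy as np

def optimize_trajectory(start, goal, n=8, lr=0.1, steps=10000):
    """Gradient descent on a smoothness cost: 0.5 * sum of squared second
    differences of the trajectory, with the endpoints held fixed."""
    # Deliberately crooked initialization so there is something to optimize.
    x = start + (goal - start) * np.linspace(0.0, 1.0, n) ** 2
    for _ in range(steps):
        acc = x[:-2] - 2.0 * x[1:-1] + x[2:]   # discrete acceleration
        grad = np.zeros_like(x)
        grad[:-2] += acc                       # adjoint of the second
        grad[1:-1] += -2.0 * acc               # difference operator,
        grad[2:] += acc                        # scattered back onto x
        grad[0] = grad[-1] = 0.0               # endpoints are constraints
        x -= lr * grad
    return x

traj = optimize_trajectory(0.0, 1.0)
# The crooked start is flattened into a straight reach: the
# minimum-acceleration path between the clamped endpoints.
```

Replacing the smoothness cost with energy, variance, or risk terms changes the solution but not the machinery, which is the sense in which a single optimization principle can cover many motor tasks.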

1.2. Hypothesis 2 – Cost Functions Are Diverse across Areas and Change over Development

A second realization is that cost functions need not be global. Neurons in different brain areas may optimize different things, e.g., the mean squared error of movements, surprise in a visual stimulus, or the allocation of attention. Importantly, such a cost function could be locally generated. For example, neurons could locally evaluate the quality of their statistical model of their inputs (Figure 1B). Alternatively, cost functions for one area could be generated by another area. Moreover, cost functions may change over time, e.g., guiding young humans to understanding simple visual contrasts early on, and faces a bit later. This could allow the developing brain to bootstrap more complex knowledge based on simpler knowledge. Cost functions in the brain are likely to be complex and to be arranged to vary across areas and over development.
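A locally generated cost function can be sketched with the classic delta rule: a unit scores how well it predicts a signal derivable from its own input stream, and every quantity in the weight update is locally available. The input statistics, target construction, and learning rate below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical structure in the unit's input stream that it should capture.
true_w = np.array([0.5, -1.0, 2.0])

w = np.zeros(3)                      # the unit's synaptic weights
lr = 0.05
for _ in range(5000):
    x = rng.normal(size=3)           # one input sample
    target = true_w @ x              # signal the unit tries to predict
    err = target - w @ x             # local error: input and prediction only
    w += lr * err * x                # delta rule: a purely local update
# `w` converges toward `true_w`, driving the local prediction cost to zero.
```

No other brain area needs to supply the error here; the cost (squared prediction error on the unit's own inputs) is evaluated, and descended, entirely locally.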

1.3. Hypothesis 3 – Specialized Systems Allow Efficient Solution of Key Computational Problems

A third realization is that structure matters. The patterns of information flow seem fundamentally different across brain areas, suggesting that they solve distinct computational problems. Some brain areas are highly recurrent, perhaps making them predestined for short-term memory storage (Wang, 2012). Some areas contain cell types that can switch between qualitatively different states of activation, such as a persistent firing mode vs. a transient firing mode, in response to particular neurotransmitters (Hasselmo, 2006). Other areas, like the thalamus, appear to have the information from other areas flowing through them, perhaps allowing them to determine information routing (Sherman, 2005). Areas like the basal ganglia are involved in reinforcement learning and gating of discrete decisions (Doya, 1999; Sejnowski and Poizner, 2014). As every programmer knows, specialized algorithms matter for efficient solutions to computational problems, and the brain is likely to make good use of such specialization (Figure 1C).

These ideas are inspired by recent advances in machine learning, but we also propose that the brain has major differences from any of today's machine learning techniques. In particular, the world gives us a relatively limited amount of information that we could use for supervised learning (Fodor and Crowther, 2002). There is a huge amount of information available for unsupervised learning, but there is no reason to assume that a generic unsupervised algorithm, no matter how powerful, would learn the precise things that humans need to know, in the order that they need to know it. The evolutionary challenge of making unsupervised learning solve the “right” problems is, therefore, to find a sequence of cost functions that will deterministically build circuits and behaviors according to prescribed developmental stages, so that in the end a relatively small amount of information suffices to produce the right behavior. For example, a developing duck imprints (Tinbergen, 1965) a template of its parent, and then uses that template to generate goal-targets that help it develop other skills like foraging.

Generalizing from this and from other studies (Minsky, 1977; Ullman et al., 2012), we propose that many of the brain's cost functions arise from such an internal bootstrapping process. Indeed, we propose that biological development and reinforcement learning can, in effect, program the emergence of a sequence of cost functions that precisely anticipates the future needs faced by the brain's internal subsystems, as well as by the organism as a whole. This type of developmentally programmed bootstrapping generates an internal infrastructure of cost functions which is diverse and complex, while simplifying the learning problems faced by the brain's internal processes. Beyond simple tasks like familial imprinting, this type of bootstrapping could extend to higher cognition, e.g., internally generated cost functions could train a developing brain to properly access its memory or to organize its actions in ways that will prove to be useful later on. The potential bootstrapping mechanisms that we will consider operate in the context of unsupervised and reinforcement learning, and go well beyond the types of curriculum learning ideas used in today's machine learning (Bengio et al., 2009).

In the rest of this paper, we will elaborate on these hypotheses. First, we will argue that both local and multi-layer optimization is, perhaps surprisingly, compatible with what we know about the brain. Second, we will argue that cost functions differ across brain areas and change over time and describe how cost functions interacting in an orchestrated way could allow bootstrapping of complex function. Third, we will list a broad set of specialized problems that need to be solved by neural computation, and the brain areas that have structure that seems to be matched to a particular computational problem. We then discuss some implications of the above hypotheses for research approaches in neuroscience and machine learning, and sketch a set of experiments to test these hypotheses. Finally, we discuss this architecture from the perspective of evolution.


Historical Perspectives

PERSONALITY ASSESSMENT

Personality assessment has come to rival intelligence testing as a task performed by psychologists. However, while most psychologists would agree that an intelligence test is generally the best way to measure intelligence, no such consensus exists for personality evaluation. In long-term perspective, it would appear that two major philosophies and perhaps three assessment methods have emerged. The two philosophies can be traced back to Allport’s (1937) distinction between nomothetic versus idiographic methodologies and Meehl’s (1954) distinction between clinical and statistical or actuarial prediction. In essence, some psychologists feel that personality assessments are best accomplished when they are highly individualized, while others have a preference for quantitative procedures based on group norms. The phrase “seer versus sign” has been used to epitomize this dispute. The three methods referred to are the interview, and projective and objective tests. Obviously, the first way psychologists and their predecessors found out about people was to talk to them, giving the interview historical precedence. But following a period when the use of the interview was eschewed by many psychologists, it has made a return. It would appear that the field is in a historical spiral, with various methods leaving and returning at different levels.

The interview began as a relatively unstructured conversation with the patient and perhaps an informant, with varying goals, including obtaining a history, assessing personality structure and dynamics, establishing a diagnosis, and many other matters. Numerous publications have been written about interviewing (e.g., Menninger, 1952), but in general they provided outlines and general guidelines as to what should be accomplished by the interview. However, model interviews were not provided. With or without this guidance, the interview was viewed by many as a subjective, unreliable procedure that could not be sufficiently validated. For example, the unreliability of psychiatric diagnosis based on studies of multiple interviewers had been well established (Zubin, 1967). More recently, however, several structured psychiatric interviews have appeared in which the specific content, if not specific items, has been presented, and for which very adequate reliability has been established. There are by now several such interviews available, including the Schedule for Affective Disorders and Schizophrenia (SADS) (Spitzer & Endicott, 1977), the Renard Diagnostic Interview (Helzer, Robins, Croughan, & Weiner, 1981), and the Structured Clinical Interview for DSM-III, DSM-III-R, or DSM-IV (SCID or SCID-R) (Spitzer & Williams, 1983) (now updated for DSM-IV). These interviews have been established in conjunction with objective diagnostic criteria including DSM-III itself, the Research Diagnostic Criteria (Spitzer, Endicott, & Robins, 1977), and the Feighner Criteria (Feighner et al., 1972). These new procedures have apparently ushered in a “comeback” of the interview, and many psychiatrists and psychologists now prefer to use these procedures rather than either the objective- or projective-type psychological test.

Those advocating use of structured interviews point to the fact that in psychiatry, at least, tests must ultimately be validated against judgments made by psychiatrists. These judgments are generally based on interviews and observation, since there really are no biological or other objective markers of most forms of psychopathology. If that is indeed the case, there seems little point in administering elaborate and often lengthy tests when one can just as well use the criterion measure itself, the interview, rather than the test. There is no way that a test can be more valid than an interview if an interview is the validating criterion. Structured interviews have made a major impact on the scientific literature in psychopathology, and it is rare to find a recently written research report in which the diagnoses were not established by one of them. It would appear that we have come full circle regarding this matter, and until objective markers of various forms of psychopathology are discovered, we will be relying primarily on the structured interviews for our diagnostic assessments.

Interviews such as the SCID or the Diagnostic Interview Schedule (DIS) type are relatively lengthy and comprehensive, but there are now several briefer, more specific interview or interview-like procedures. Within psychiatry, perhaps the most well-known procedure is the Brief Psychiatric Rating Scale (BPRS) (Overall & Gorham, 1962). The BPRS is a brief, structured, repeatable interview that has essentially become the standard instrument for assessment of change in patients, usually as a function of taking some form of psychotropic medication. In the specific area of depression, the Hamilton Depression Scale (Hamilton, 1960) plays a similar role. There are also several widely used interviews for patients with dementia, which generally combine a brief mental-status examination and some form of functional assessment, with particular reference to activities of daily living. The most popular of these scales are the Mini-Mental Status Examination of Folstein, Folstein, and McHugh (1975) and the Dementia Scale of Blessed, Tomlinson, and Roth (1968). Extensive validation studies have been conducted with these instruments, perhaps the most well-known study having to do with the correlation between scores on the Blessed, Tomlinson, and Roth scale used in patients while they are living and the senile plaque count determined on autopsy in patients with dementia. The obtained correlation of .7 quite impressively suggested that the scale was a valid one for detection of dementia. In addition to these interviews and rating scales, numerous methods have been developed by nurses and psychiatric aides for assessment of psychopathology based on direct observation of ward behavior (Raskin, 1982). The most widely used of these rating scales are the Nurses’ Observation Scale for Inpatient Evaluation (NOSIE-30) (Honigfeld & Klett, 1965) and the Ward Behavior Inventory (Burdock, Hardesty, Hakerem, Zubin, & Beck, 1968).
These scales assess such behaviors as cooperativeness, appearance, communication, aggressive episodes, and related behaviors, and are based on direct observation rather than reference to medical records or the report of others. Scales of this type supplement the interview with information concerning social competence and capacity to carry out functional activities of daily living.

Again taking a long-term historical view, it is our impression that after many years of neglect by the field, the interview has made a successful return to the arena of psychological assessment, but the interviews now used are quite different from the loosely organized, “freewheeling,” conversation-like interviews of the past (Hersen & Van Hasselt, 1998). First, their organization tends to be structured, and the interviewer is required to obtain certain items of information. It is generally felt that formulation of specifically worded questions is counterproductive; rather, the interviewer, who should be an experienced clinician trained in the use of the procedure, should be able to formulate questions that will elicit the required information. Second, the interview procedure must meet psychometric standards of validity and reliability. Finally, while structured interviews tend to be atheoretical in orientation, they are based on contemporary scientific knowledge of psychopathology. Thus, for example, the information needed to establish a differential diagnosis within the general classification of mood disorders is derived from the scientific literature on depression and related mood disorders.

The rise of the interview appears to have occurred in parallel with the decline of projective techniques. Those of us in a chronological category that may be roughly described as middle-aged may recall that our graduate training in clinical psychology probably included extensive course work and practicum experience involving the various projective techniques. Most clinical psychologists would probably agree that even though projective techniques are still used to some extent, the atmosphere of ferment and excitement concerning these procedures that existed during the 1940s and 1950s no longer seems to exist. Even though the Rorschach technique and Thematic Apperception Test (TAT) were the major procedures used during that era, a variety of other tests emerged quite rapidly: the projective use of human-figure drawings (Machover, 1949), the Szondi Test (Szondi, 1952), the Make-A-Picture-Story (MAPS) Test (Shneidman, 1952), the Four-Picture Test (VanLennep, 1951), the Sentence Completion Tests (e.g., Rohde, 1957), and the Holtzman Inkblot Test (Holtzman, 1958). The exciting work of Murray and his collaborators reported in Explorations in Personality (Murray, 1938) had a major impact on the field and stimulated extensive utilization of the TAT. It would probably be fair to say that the sole survivor of this active movement is the Rorschach test. Many clinicians continue to use the Rorschach test, and the work of Exner and his collaborators has lent it increasing scientific respectability (see Chapter 17 in this volume).

There are undoubtedly many reasons for the decline in utilization of projective techniques, but in our view they can be summarized by the following points:

Increasing scientific sophistication created an atmosphere of skepticism concerning these instruments. Their validity and reliability were called into question by numerous studies (e.g., Swensen, 1957, 1968; Zubin, 1967), and a substantial segment of the professional community felt that the claims made for these procedures could not be substantiated.

Developments in alternative procedures, notably the Minnesota Multiphasic Personality Inventory (MMPI) and other objective tests, convinced many clinicians that the information previously gained from projective tests could be gained more efficiently and less expensively with objective methods. In particular, the voluminous MMPI research literature has demonstrated its usefulness in an extremely wide variety of clinical and research settings. When the MMPI and related objective techniques were pitted against projective techniques during the days of the “seer versus sign” controversy, it was generally demonstrated that sign was as good as or better than seer in most of the studies conducted (Meehl, 1954).

In general, the projective techniques are not atheoretical and, in fact, are generally viewed as being associated with one or another branch of psychoanalytic theory. While psychoanalysis remains a strong and vigorous movement within psychology, there are numerous alternative theoretical systems at large, notably behaviorally and biologically oriented systems. As implied in the section of this chapter covering behavioral assessment, behaviorally oriented psychologists pose theoretical objections to projective techniques and make little use of them in their practices. Similarly, projective techniques tend not to receive high levels of acceptance in biologically-oriented psychiatry departments. In effect, then, utilization of projective techniques declined for scientific, practical, and philosophical reasons. However, the Rorschach test in particular continues to be productively used, primarily by psychodynamically oriented clinicians.

The early history of objective personality tests has been traced by Cronbach (1949, 1960). The beginnings apparently go back to Sir Francis Galton, who devised personality questionnaires during the latter part of the 19th century. We will not repeat that history here, but rather will focus on those procedures that survived into the contemporary era. In our view, there have been three such major survivors: a series of tests developed by Guilford and collaborators (Guilford & Zimmerman, 1949), a similar series developed by Cattell and collaborators (Cattell, Eber, & Tatsuoka, 1970), and the MMPI. In general, but certainly not in all cases, the Guilford and Cattell procedures are used for individuals functioning within the normal range, while the MMPI is more widely used in clinical populations. Thus, for example, Cattell’s 16PF test may be used to screen job applicants, while the MMPI may be more typically used in psychiatric health-care facilities. Furthermore, the Guilford and Cattell tests are based on factor analysis and are trait-oriented, while the MMPI in its standard form does not make use of factor-analytically derived scales and is more oriented toward psychiatric classification. Thus, the Guilford and Cattell scales contain measures of such traits as dominance or sociability, while most of the MMPI scales are named after psychiatric classifications such as paranoia or hypochondriasis.

Currently, most psychologists use one or more of these objective tests rather than interviews or projective tests in screening situations. For example, many thousands of patients admitted to psychiatric facilities operated by the Veterans Administration take the MMPI shortly after admission, while applicants for prison-guard jobs in the state of Pennsylvania take the Cattell 16PF. However, the MMPI in particular is commonly used as more than a screening instrument. It is frequently used as a part of an extensive diagnostic evaluation, as a method of evaluating treatment, and in numerous research applications. There is little question that it is the most widely used and extensively studied procedure in the objective personality-test area. Even though the 566 true-or-false items have remained the same since the initial development of the instrument, the test’s applications in clinical interpretation have evolved dramatically over the years. We have gone from a perhaps overly naive dependence on single-scale evaluations and overly literal interpretation of the names of the scales (many of which are archaic psychiatric terms) to a sophisticated configural interpretation of profiles, much of which is based on empirical research (Gilberstadt & Duker, 1965; Marks, Seeman, & Haller, 1974). Correspondingly, the methods of administering, scoring, and interpreting the MMPI have kept pace with technological and scientific advances in the behavioral sciences. From its beginnings with sorting cards into piles, hand scoring, and subjective interpretation, the MMPI has moved to computerized administration and scoring, interpretation based, at least to some extent, on empirical findings, and computerized interpretation. As is well known, several companies provide computerized scoring and interpretation of the MMPI.

Since the appearance of the earlier editions of this handbook, there have been two major developments in the field of objective personality assessment. First, Millon has produced a new series of tests: the Millon Clinical Multiaxial Inventory (Versions I and II), the Millon Adolescent Personality Inventory, and the Millon Behavioral Health Inventory (Millon, 1982, 1985). Second, the MMPI has been completely revised and restandardized, and is now known as the MMPI-2. Since the appearance of the second edition of this handbook, the MMPI-2 has been widely adopted. Chapter 16 in this volume describes these new developments in detail.

Even though we should anticipate continued spiraling of trends in personality assessment, it would appear that we have passed an era of projective techniques and are now living in a time of objective assessment, with an increasing interest in the structured interview. There also appears to be increasing concern with the scientific status of our assessment procedures. In recent years, there has been particular concern about reliability of diagnosis, especially since distressing findings appeared in the literature suggesting that psychiatric diagnoses were being made quite unreliably (Zubin, 1967). The issue of validity in personality assessment remains a difficult one for a number of reasons. First, if by personality assessment we mean prediction or classification of some psychiatric diagnostic category, we have the problem of there being essentially no known objective markers for the major forms of psychopathology. Therefore, we are left essentially with psychiatrists’ judgments. The DSM system has greatly improved this situation by providing objective criteria for the various mental disorders, but the capacity of such instruments as the MMPI or Rorschach test to predict DSM diagnoses has not yet been evaluated and remains a research question for the future. Some scholars, however, even question the usefulness of taking that research course rather than developing increasingly reliable and valid structured interviews (Zubin, 1984). Similarly, there have been many reports of the failure of objective tests to predict such matters as success in an occupation or trustworthiness with regard to handling a weapon. For example, objective tests are no longer used to screen astronauts, since they were not successful in predicting who would be successful or unsuccessful (Cordes, 1983).
There does, in fact, appear to be a movement within the general public and the profession toward discontinuation of the use of personality-assessment procedures for decision-making in employment situations. We would note, as another possibly significant trend, a movement toward direct observation of behavior in the form of behavioral assessment, as in the case of the development of the Autism Diagnostic Observation Schedule (ADOS) (Lord et al., 1989). The Zeitgeist is definitely in opposition to procedures in which the intent is disguised. Burdock and Zubin (1985), for example, argue that “nothing has as yet replaced behavior for evaluation of mental patients.”


1RM prediction: a novel methodology based on the force-velocity and load-velocity relationships

Purpose: This study aimed to evaluate the accuracy of a novel approach for predicting the one-repetition maximum (1RM). The prediction is based on the force-velocity and load-velocity relationships determined from measured force and velocity data collected during resistance-training exercises with incremental submaximal loads. 1RM was determined as the load corresponding to the intersection of these two curves, i.e., the point beyond which the gravitational force exceeds the force that the subject can exert.
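The intersection step described above can be sketched numerically. The function below is an illustration only, not the authors' exact procedure: it assumes straight-line fits for both relationships (the paper's curve forms may differ), and all names (`predict_1rm_kg`, `loads_kg`, etc.) are this sketch's own.

```python
import numpy as np

def predict_1rm_kg(loads_kg, velocities_ms, forces_n, g=9.81):
    """Illustrative intersection-based 1RM estimate.

    Fits a line to the measured force-velocity data and a line to the
    load-velocity data (load expressed as gravitational force in newtons),
    then returns the load, in kg, at which the two lines cross.
    """
    v = np.asarray(velocities_ms, dtype=float)
    # Force-velocity line: F(v) = a_f * v + b_f
    a_f, b_f = np.polyfit(v, np.asarray(forces_n, dtype=float), 1)
    # Load-velocity line, with load converted to weight: W(v) = a_w * v + b_w
    weights_n = np.asarray(loads_kg, dtype=float) * g
    a_w, b_w = np.polyfit(v, weights_n, 1)
    # Intersection: the velocity at which exertable force equals the
    # load's gravitational force
    v_star = (b_w - b_f) / (a_f - a_w)
    return (a_w * v_star + b_w) / g  # convert newtons back to kg
```

Given data generated from two known lines, the function returns their analytic crossing point, which is the defining property the abstract describes.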

Methods: The proposed force-velocity-based method (FVM) was tested on 37 participants (age 23.9 ± 3.1 years; BMI 23.44 ± 2.45 kg/m²) with no specific resistance-training experience, and the predicted 1RM was compared to that obtained using a direct method (DM) in chest-press (CP) and leg-press (LP) exercises.

Results: The mean 1RM in CP was 99.5 kg (±27.0) for DM and 100.8 kg (±27.2) for FVM (SEE = 1.2 kg), whereas the mean 1RM in LP was 249.3 kg (±60.2) for DM and 251.1 kg (±60.3) for FVM (SEE = 2.1 kg). A high correlation was found between the two methods for both CP and LP exercises (r = 0.999, p < 0.001). Good agreement between the two methods emerged from the Bland-Altman plot analysis.
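The agreement analysis referred to above rests on standard Bland-Altman statistics: the mean difference (bias) between methods and the 95% limits of agreement. The following is a generic sketch of that computation under the usual normality assumption, not the paper's own code:

```python
import numpy as np

def bland_altman_stats(method_a, method_b):
    """Mean bias and 95% limits of agreement between two measurement methods."""
    a = np.asarray(method_a, dtype=float)
    b = np.asarray(method_b, dtype=float)
    diff = a - b                       # per-subject difference
    bias = diff.mean()                 # systematic difference between methods
    sd = diff.std(ddof=1)              # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

"Good agreement" in this framework means a bias near zero and limits of agreement narrow enough to be practically negligible for training-load prescription.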

Conclusion: These findings suggest that the proposed methodology is a valid alternative to other indirect approaches for 1RM prediction. The mathematical construct is based simply on the definition of the 1RM and is fed with the subject's muscle-strength capacities measured during a specific exercise. Its reliability is thus expected to be unaffected by the factors that typically jeopardize regression-based approaches.

Keywords: Force–velocity relationship; Muscle strength assessment; One-repetition maximum; Resistance training.


Interactions

What researchers do know is that the interaction between heredity and environment is often the most important factor of all. Kevin Davies of PBS's Nova described one fascinating example of this phenomenon.

Perfect pitch is the ability to detect the pitch of a musical tone without any reference. Researchers have found that this ability tends to run in families and believe that it might be tied to a single gene. However, they've also discovered that possessing the gene alone is not enough to develop this ability. Instead, musical training during early childhood is necessary to allow this inherited ability to manifest itself.  

Height is another example of a trait influenced by the interaction of nature and nurture. A child might come from a family where everyone is tall and may have inherited the genes for height. However, if he grows up in a deprived environment where he does not receive proper nourishment, he might never attain the height he would have reached had he grown up in a healthier environment.


Psychology and policing: a dynamic partnership

In 1982, a small group of psychologists working in police agencies found an APA home in Div. 18 (Psychologists in Public Service). At that time, law enforcement resisted psychology. So, it was extremely gratifying when, 15 years later, police chiefs met with APA leadership seeking input on managing pressing problems that affect the quality of American policing.

Indeed, psychology has made significant inroads into improving the functioning of the tradition-clad occupations that are responsible for public safety and law enforcement throughout the country. The work of the five psychologists profiled in this issue represents the breadth of services that are available to police and public safety organizations. A survey by VerHelst, Delprino and O'Regan (2002) confirms that police use of psychological services continues to grow. Its findings support those of a national survey (Scrivner, 1994), which showed the impact that psychology has made on policing.

Transforming events

Police departments' acceptance of psychology reflects a major cultural shift in policing and allows other transforming events to occur. For example, psychology's resources could be applied to addressing significant national policy issues, such as the interactions between police and citizens in their communities. Consequently, the growing number of psychologists working with law enforcement argues for psychology to have an even greater influence on public policy and the delivery of police services in this country. The work of the APA Committee on Urban Initiatives (CUI) is one step in this direction. In 1998, CUI incorporated community policing into the committee's portfolio to explore the potential for this innovative police reform to improve relationships between the police and urban citizens.

Community policing, cited as one factor responsible for the dramatic decrease in crime, is based on establishing effective problem-solving partnerships with the community to prevent crime and disorder while improving the quality of life. As such, community policing promotes behavioral change. Therefore, this major criminal justice initiative has a psychological component.

CUI initiated its work by hosting a series of roundtable discussions with police chiefs in conjunction with APA's Annual Conventions. For three consecutive years, CUI met with local police chiefs and the psychologists who worked with them to determine where we could forge stronger alliances.

The dialogue covered a wide range of topics that go beyond delivery of traditional mental health services. Some examples include: identifying the types of assistance needed to end racial profiling, intervening in police brutality, strengthening police integrity and developing greater understanding of police officer fear. Other topics involved examining alternatives to arresting the homeless, responding to hate crimes, and mediation and anger-management training for front-line officers.

The roundtables also addressed psychology's research expertise. These discussions generated research ideas for studying the impact on police officers of observing violence, how violence goes home with officers to become domestic violence, and using the research literature on self-fulfilling prophecies and changing stereotypes to examine ethnic profiling. The CUI initiative came full circle at APA's 2001 Annual Convention when the San Francisco police chief and the sheriff of Los Angeles County participated in a workshop on racial profiling. They discussed their efforts to use community policing to prevent racial profiling.

Maintaining the momentum

These initiatives show steady growth in the partnership between police and psychology. However, we still have more to do to ensure that talk becomes action and influences policies on public safety. Psychology, with a knowledge base that is relevant to so many social issues and the tradition of seeking research-based solutions, is uniquely positioned to maintain this momentum and help to create better lives for people.

The events of Sept. 11 have broadened psychology's role in helping first responders and victims of this tragedy. However, they also create new roles for psychology as police increase their participation in homeland security. Psychology can be an important partner in helping police balance the delivery of law enforcement services to all citizens while facing the challenge of maintaining readiness to respond to public safety alerts.


Contents

Commentators seem to use different terms when describing the symptomatology. These terms are similar to, but not quite synonymous with, the term "Kundalini syndrome". However, they all seem to describe, more or less, the same phenomenon, or the same main features of the symptomatology.

The terms "Kundalini Syndrome" or "Physio-Kundalini Syndrome", or the references to a "syndrome", are mostly used by writers in the field of Near-Death Studies, but also by writers in the fields of Transpersonal Psychology, Psychology, and Mental Health/Psychiatry. The terminology of "syndrome" seems to have a closer relationship to the language of medicine and statistics than the other terminologies. It is also the main basis for two measuring instruments developed by near-death researchers: the Kundalini Scale and the Physio-Kundalini Syndrome Index.

Another term, "Kundalini awakening", is used in Transpersonal Psychology, but also by writers representing both the fields of Transpersonal Psychology and Near-Death Studies. This term seems to have a closer relationship to the language of Hinduism and the yogic tradition than the terminology of "syndrome". Greyson is one of the authors who uses both the terminology of "syndrome" and the terminology of "awakening".

Scotton uses the term "difficult kundalini experiences" when discussing clinical aspects of the phenomenon. Overall, he seems to prefer the term "Kundalini experience", but he also uses the terminology of "awakening". Other commentators who use the term "Kundalini experience" include Thalbourne. In his 1993 article, Greyson reviews many of the discussions of Kundalini symptomatology. In this review he cites, and uses, many of the similar terms associated with Kundalini symptomatology, such as "kundalini activation", "kundalini awakening", "kundalini phenomena", "kundalini activity", and "kundalini arousal". Sanches & Daniels, although preferring the term "kundalini awakening", also use the term "kundalini arousal" in their discussion of the phenomenon. Grabovac & Ganesan use the term "Kundalini episodes" in their article on "Spirituality and Religion in Canadian Psychiatric Residency Training".