
This is an FBI investigative file from the Epstein Files release (FBI VOL00009). The text was extracted automatically from the original PDF.

FBI VOL00009

EFTA00226396

453 pages
Pages 401–420 of 453
Page 401 of 453
activity.
Transport minor to engage in sex activity.
Entice minor to travel in interstate commerce to engage in sex activity.
16
EFTA00226796
Page 402 of 453
The City College of New York
INTERNATIONAL STUDIES PROGRAM
North Academic Center, Room 6/141
160 Convent Avenue
New York, New York 10031
TEL
FAX:
www.ccny.cuny.edu

August 21, 2006

Jeffrey Epstein
c/o Darren Indyke, Esq.
457 Madison Avenue, 14th Floor
New York, N.Y. 10022

Dear Mr. Epstein,

Thank you for your continued and generous support of the undergraduate academic careers of Georges Ndabashimiye and Nicole Mutesi.

Both students have done very well both academically and in co-curricular life and expect to graduate in June, 2008. Georges will return to Rwanda to teach and Nicole plans to join the energy industry which is focused on developing Rwanda's newly found resources in natural gas.

Your support of these two students will thus contribute to the human resource wealth of Rwanda.

Marina W. Fernando, Ph.D.
Director, International Studies Program
and Deputy Dean of Social Science

THE CITY UNIVERSITY OF NEW YORK
EFTA00226797
Page 403 of 453
[Handwritten letter; text illegible in the OCR. The signature appears to read NICOLE MUTESI.]
EFTA00226798
Page 404 of 453
[Handwritten page; text illegible in the OCR.]
EFTA00226799
Page 405 of 453
[Handwritten letter; text illegible in the OCR.]
EFTA00226800
Page 406 of 453
[Handwritten letter; text illegible in the OCR.]
EFTA00226801
Page 407 of 453
447 of 1456 DOCUMENTS 
Copyright 2004 Gale Group, Inc. 
ASAP 
Copyright 2004 American Association for Artificial Intelligence 
AI Magazine
June 22, 2004 
SECTION: No. 2, Vol. 25; Pg. 113; ISSN: 0738-4602 
IAC-ACC-NO: 119024857 
LENGTH: 7274 words 
HEADLINE: The St. Thomas common sense symposium: designing architectures for human-level 
intelligence. 
BYLINE: Minsky, Marvin; Singh, Push; Sloman, Aaron 
BODY: 
To build a machine that has "common sense" was once a principal goal in the field of artificial 
intelligence. But most researchers in recent years have retreated from that ambitious aim. Instead, each 
developed some special technique that could deal with some class of problem well, but does poorly at 
almost everything else. We are convinced, however, that no one such method will ever turn out to be 
"best," and that instead, the powerful AI systems of the future will use a diverse array of resources that, 
together, will deal with a great range of problems. To build a machine that's resourceful enough to have 
humanlike common sense, we must develop ways to combine the advantages of multiple methods to 
represent knowledge, multiple ways to make inferences, and multiple ways to learn. We held a two-day 
symposium in St. Thomas, U.S. Virgin Islands, to discuss such a project—to develop new architectural 
schemes that can bridge between different strategies and representations. This article reports on the events 
and ideas developed at this meeting and subsequent thoughts by the authors on how to make progress. 
The Need for Synthesis in Modern Al 
To build a machine that has "common sense" was once a principal goal in the field of artificial 
intelligence. But most researchers in recent years have retreated from that ambitious aim. Instead, each 
developed some special technique that could deal with some class of problem well, but does poorly at 
almost everything else. An outsider might regard our field as a chaotic array of attempts to exploit the 
advantages of (for example) neural networks, formal logic, genetic programming, or statistical inference--
with the proponents of each method maintaining that their chosen technique will someday replace most of 
the other competitors. 
We do not mean to dismiss any particular technique. However, we are convinced that no one such 
method will ever turn out to be "best," and that instead, the powerful AI systems of the future will use a 
diverse array of resources that, together, will deal with a great range of problems. In other words, we 
should not seek a single "unified theory!" To build a machine that is resourceful enough to have humanlike 
common sense, we must develop ways to combine the advantages of multiple methods to represent 
knowledge, multiple ways to make inferences, and multiple ways to learn. 
We held a two-day symposium in St. Thomas, U.S. Virgin Islands, to discuss such a project--to 
develop new architectural schemes that can bridge between different strategies and representations. This 
article reports on the events and ideas developed at this meeting and subsequent thoughts by the authors on 
how to make progress. (1) 
Organizing the Diversity of AI Methods 
EFTA00226802
Page 408 of 453
Marvin Minsky kicked off the meeting by discussing how we might begin to organize the many 
techniques that have been developed in AI so far. While AI researchers have invented many 
representations, methods, and architectures for solving many types of problems, they still have little 
understanding of the strengths and weaknesses of each of these techniques. We need a theory that helps to 
map the types of problems we face onto the types of solutions that are available to us. When should one use 
a neural network? When should one use statistical learning? When should one use logical theorem proving? 
To help answer these kinds of questions, Minsky suggested that we could organize different AI 
methods into a "causal diversity matrix" (figure 1). Here, each problem-solving method, such as analogical 
reasoning, logical theorem proving, and statistical inference, is assessed in terms of its competence at 
dealing with problem domains with different causal structures. 
[FIGURE 1 OMITTED]
Statistical inference is often useful for situations that are affected by many different matched causal 
components, but where each contributes only slightly to the final phenomenon. A good example of such a 
problem-type is visual texture classification, such as determining whether a region in an image is a patch of 
skin or a fragment of a cloud. This can be done by summing the contributions of many small pieces of 
evidence such as the individual pixels of the texture. No one pixel is terribly important, but en masse they 
determine the classification. Formal logic, on the other hand, works well on problems where there are 
relatively few causal components, but which are arranged in intricate structures sensitive to the slightest 
disturbance or inconsistency. An example of such a problem-type is verifying the correctness of a computer 
program, whose behavior can be changed completely by modifying a single bit of its code. Case-based and 
analogical reasoning lie between these extremes, matched to problems where there are a moderate number 
of causal components each with a modest amount of influence. Many common sense domains, such as 
human social reasoning, may fall into this category. Such problems may involve knowledge too difficult to 
formalize as a small set of logical axioms, or too difficult to acquire enough data about to train an adequate 
statistical model. 
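The texture example above can be sketched as a sum of many weak log-likelihood contributions. The class-conditional probabilities and intensity buckets below are invented for illustration; only the summing-of-weak-evidence idea comes from the text:

```python
import math

# Hypothetical per-pixel likelihoods: probability of each intensity bucket
# under the "skin" and "cloud" classes (illustrative numbers only).
P_SKIN = {"dark": 0.2, "mid": 0.5, "bright": 0.3}
P_CLOUD = {"dark": 0.1, "mid": 0.3, "bright": 0.6}

def classify_texture(pixels):
    """Sum many small per-pixel log-likelihood contributions.

    No single pixel is terribly important; the accumulated evidence
    determines the classification, as described above.
    """
    log_odds = 0.0  # log P(skin)/P(cloud), assuming equal priors
    for p in pixels:
        log_odds += math.log(P_SKIN[p] / P_CLOUD[p])
    return "skin" if log_odds > 0 else "cloud"

patch = ["mid"] * 30 + ["dark"] * 10 + ["bright"] * 10
print(classify_texture(patch))  # prints "skin"
```

Flipping any one pixel barely moves the sum, which is exactly why this style of method tolerates the "many weak causes" regime but fails in the few-intricate-causes regime where formal logic excels.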
It is true that many of these techniques have worked well outside of the regimes suggested by this 
causal diversity matrix. For example, statistical methods have found application in realms where previously 
rule-based methods were the norm, such as in the syntactic parsing of natural language text. However, we 
need a richer heuristic theory of when to apply different AI techniques, and this causal diversity matrix 
could be an initial step toward that. We need to further develop and extend such theories to include the 
entire range of AI methods that have been developed, so that we can more systematically exploit the 
advantages of particular techniques. 
How could such a "meta-theory of AI techniques" be used by an AI architecture? Before we turned to 
this question, we discussed a concrete problem domain in which we could think more clearly about the goal 
of building a machine with common sense. 
Returning to the Blocks World 
Later that first morning, Push Singh presented a possible target domain for a commonsense 
architecture project. Consider the situation of two children playing together with blocks (figure 2). 
[FIGURE 2 OMITTED]
Even in this simple situation, the children may have concerns that span many "mental realms": 
Physical: What if I pulled out that bottom block? 
Bodily: Can I reach that green block from here? 
Social: Should I help him with his tower or knock it down? 
Psychological: I forgot where I left the blue block. 
Visual: Is the blue block hidden behind that stack? 
Spatial: Can I arrange those blocks into the shape of a table? 
Tactile: What would it feel like to grab five blocks at once? 
EFTA00226803
Page 409 of 453
Self-Reflective: I'm getting bored with this--what else is there to do? 
Singh argued that no present-day AI system demonstrates such a broad range of commonsense skills. 
Any architecture we design should aim to achieve some competence within each of these and other 
important mental realms. He proposed that to do this we work within the simplest possible domain 
requiring reasoning in each of these realms. He suggested that we develop our architectures within a 
physically realistic model world resembling the classic Blocks World, but where the world was populated 
by several simulated beings, and thus emphasizing social problems in addition to physical ones. These 
beings would manipulate simple objects like blocks, balls, and cylinders, and would participate in the kinds 
of scenarios depicted in figure 3, which include jointly building structures of various kinds, competing to 
solve puzzles, teaching each other skills through examples and through conversation, and verbally 
reflecting on their own successes and failures. 
[FIGURE 3 OMITTED]
The apparent simplicity of this world is deceptive, for many of the kinds of problems that show up in 
this world have not yet been tackled in AI, for they require combining elements of the following: 
Spatial reasoning about the spatial arrangements of objects in one's environment and how the parts of 
objects are oriented and situated in relation to one another. (Which of those blocks is closest to me?) 
Physical reasoning about the dynamic behavior of physical objects with masses and 
colliding/supporting surfaces. (What would happen if I removed that middle block from the tower?) 
Bodily reasoning about the capabilities of one's physical body. (Can I reach that block without having 
to get up?) 
Visual reasoning about the world that underlies what can be seen. (Is that a cylinder-shaped block or 
part of a person's leg?) 
Psychological reasoning about the goals and beliefs of oneself and of others. (What is the other person 
trying to do?) 
Social reasoning about the relationships, shared goals and histories that exist between people. (How 
can I accomplish my goal without the other person interfering?) 
Reflective reasoning about one's own recent deliberations. (What was I trying to do a moment ago?) 
Conversational reasoning about how to express one's ideas to others. (How can I explain my problem 
to the other person?) 
Educational reasoning about how to best learn about some subject, or to teach it to someone else. 
(How can I generalize useful rules about the world from experiences?) 
Many of the meeting participants were enthusiastic about this proposal and agreed that there would be 
challenging visual, spatial, and robotics problems within this domain. Ken Forbus pointed out that the 
video game communities would soon produce programmable virtual worlds that would easily meet our 
needs. Several participants mentioned the success of the RoboCup competitions (Kitano et al. 1997), but 
some concluded that the RoboCup domain, while appropriate for those interested in the problem of 
coordinating multiagent teams in a competitive scenario, was very different in character from the situation 
of two or three people more slowly working together on a physical task, communicating in natural 
language, and in general operating on a more thoughtful and reflective level. 
Still, the participants had a heated debate about the adequacy of the proposed problem domain. The 
most common criticism was that this world does not contain enough of a variety of objects or richness of 
behavior. Doug Lenat suggested a solution to this, which was to embed the people within not a Blocks 
World, but instead somewhere like a typical house or office, as in the popular computer game The Sims. 
Doug Riecken argued that we could develop enough of the architecture within the more limited virtual 
world, and later add extensions to deal with a wider range of objects and phenomena. 
A different response to this criticism was that in order to focus on architectural issues, it would help to 
simplify the problem domain, so that we could focus less on acquiring a large mass of world knowledge, 
and more on developing better ways for systems to use the knowledge they have. However, other 
EFTA00226804
Page 410 of 453
participants argued that restricting the world would not entirely bypass the need for large databases of 
commonsense knowledge, for even this simple world would likely require hundreds of thousands or even 
millions of elementary pieces of commonsense knowledge about space, time, physics, bodies, social 
interactions, object appearances, and so forth. 
Other participants disagreed with the virtual world domain. They felt that we should instead take the 
more practical approach of developing the architecture by starting with a useful application like a search 
engine or conversational agent, and extending its common sense abilities over time. But Ben Kuipers 
worried that choosing too specific an application would lead to what happened to most previous projects--
someone discovers some set of ad hoc tricks that leads to adequate performance, without making any more 
general progress toward more versatile, resourceful, or "more intelligent" systems. 
In the end, after long debates we achieved a substantial consensus that to solve harder problems 
requiring common sense, we first needed to solve the more restricted class of problems that show up in 
simpler domains like the proposed virtual world. Once we get the core of the architecture functioning in 
this rich but limited domain, we can attempt to extend it--or let it extend itself--to deal with a broader range of 
problems using a much broader array of commonsense knowledge. 
Large-Scale Architectures for Human-level Intelligence 
In the afternoon, we discussed large-scale architectures for machines with human-level intelligence 
and common sense. Marvin Minsky and Aaron Sloman each presented their current architectural proposals 
as a starting point for the meeting participants to criticize, debug, and elaborate. These two architectures 
share so many features that we will refer to them together as the Minsky-Sloman model. 
These architectures are distinguished by their emphasis on reflective thinking. Most cognitive models 
have focused only on ways to react or deliberate. However, to make machines more versatile, they will 
need better ways to recognize and repair the obstacles, bugs and deficiencies that result from their own 
activities. In particular, whenever one strategy fails, they'll need to have a collection of ways to switch to 
alternative ways to think. To provide for this, Minsky's architectural design includes several reflective 
levels beyond the reactive and deliberative levels. Here is one view of his model for the architecture of a 
person's mind, as described in his book, The Emotion Machine, and shown here in figure 4. 
[FIGURE 4 OMITTED]
Some participants questioned the need for so many reflective layers; would not a single one be 
enough? Minsky responded by arguing that today, when our theories still explain too little, we should 
elaborate rather than simplify, and we should be building theories with more parts, not fewer. This general 
philosophy pervades his architectural design, with its many layers, representations, critics, reasoning 
methods, and other diverse types of components. Only once we have built an architecture rich enough to 
explain most of what people can do will it make sense to try to simplify things. But today, we are still far 
from an architectural design that explains even a tiny fraction of human cognition. 
Aaron Sloman's Cognition and Affect project has explored a space of architectures proposed as 
models for human minds; a sketch of Sloman's H-CogAff model is shown in figure 5. 
[FIGURE 5 OMITTED]
This architecture appears to provide a framework for defining with greater precision than previously a 
host of mental concepts, including affective concepts, such as "emotion," "attitude," "mood," "pleasure," 
and so on. For instance, H-CogAff allows us to define at least three distinct varieties of emotions: primary, 
secondary and tertiary emotions, involving different layers of the architecture which evolved at different 
times--and the same architecture can also distinguish different forms of learning, perception, and control of 
behavior. (A different architecture might be better for exploring analogous states of insects, reptiles, or 
other mammals.) Human infants probably have a much-reduced version of the architecture that includes 
self-bootstrapping mechanisms that lead to the adult form. 
The central idea behind the Minsky-Sloman architectures is that the source of human resourcefulness 
and robustness is the diversity of our cognitive processes: we have many ways to solve every kind of 
problem--both in the world and in the mind--so that when we get stuck using one method of solution, we 
EFTA00226805
Page 411 of 453
can rapidly switch to another. There is no single underlying knowledge representation scheme or 
inferencing mechanism. 
How do such architectures support such diversity? In the case of Minsky's Emotion Machine 
architecture, the top level is organized as follows. When the system encounters a problem, it first uses some 
knowledge about "problem-types" to select some "way-to-think" that might work. Minsky describes "ways-
to-think" as configurations of agents within the mind that dispose it towards using certain styles of 
representation, collections of commonsense knowledge, strategies for reasoning, types of goals and 
preferences, memories of past experiences, manners of reflections, and all the other aspects that go into a 
particular "cognitive style." One source of knowledge relating problem-types to ways-to-think is the causal 
diversity matrix discussed at the start of the meeting--for example, if the system were presented with a 
social problem, it might use the causal diversity matrix to then select a case-based style of reasoning, and a 
particular database of social reasoning episodes to use with it. 
However, any particular such approach is likely to fail in various ways. Then if certain "critic" agents 
notice specific ways in which that approach has failed, they either suggest strategies to adapt that approach, 
or suggest alternative ways-to-think, as shown in figure 6. This is not done by employing any 
simple strategy for reflection and repair, but rather by using large arrays of higher level knowledge about 
where each way-to-think has advantages and disadvantages, and how to adapt them to new contexts. 
[FIGURE 6 OMITTED] 
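The selection-and-switching loop described above can be sketched as a toy dispatcher. The problem types, method names, and critic fallback rules here are all invented for illustration; nothing below comes from an actual Emotion Machine or H-CogAff implementation:

```python
CAUSAL_DIVERSITY = {           # problem-type -> preferred way-to-think
    "many-weak-causes": "statistical_inference",
    "few-intricate-causes": "logical_proof",
    "moderate-causes": "case_based_reasoning",
}

FALLBACKS = {                  # a critic's suggested alternative on failure
    "statistical_inference": "case_based_reasoning",
    "logical_proof": "case_based_reasoning",
    "case_based_reasoning": "reformulate_problem",
}

def solve(problem_type, try_method):
    """Pick a way-to-think from the matrix, then switch instead of getting stuck.

    try_method(name) stands in for actually running that way-to-think;
    it returns True on success.
    """
    method = CAUSAL_DIVERSITY.get(problem_type, "reformulate_problem")
    attempted = []
    while method is not None:
        attempted.append(method)
        if try_method(method):          # this way-to-think handled the problem
            return method, attempted
        method = FALLBACKS.get(method)  # a critic proposes an alternative
    return None, attempted              # ran out of ways-to-think

# A social problem where the statistical approach fails but a
# case-based one succeeds:
result, tried = solve("moderate-causes", lambda m: m == "case_based_reasoning")
print(result, tried)  # prints: case_based_reasoning ['case_based_reasoning']
```

A real architecture would, as the text notes, keep several ways-to-think active in parallel rather than trying them serially; the loop above only illustrates the critic-driven switching.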
In Minsky's design, several ways-to-think are usually active in parallel. This enables the system to 
quickly and fluently switch between different ways-to-think because, instead of starting over at each 
transition, each newly activated way-to-think will find an already-prepared representation. The system will 
rarely "get stuck" because those alternative ways-to-think will be ready to take over when the present one 
runs into trouble, as shown in figure 7. 
[FIGURE 7 OMITTED] 
Here each way-to-think involves reasoning in a particular subset of mental realms. Impasses 
encountered while reasoning in one set of mental realms can be overcome within others. Further 
information about these architectures can be found in Singh and Minsky (2003), Sloman (2001), and 
McCarthy et al. (2002). Minsky's model will be described in detail in his new book The Emotion Machine 
(Minsky, forthcoming). 
Generally, the participants were sympathetic to these proposals, and all agreed with the idea that to 
achieve human-level intelligence we needed to develop more effective ways to combine multiple AI 
techniques. Ken Forbus suggested that we needed a kind of "component marketplace," and that we should 
find ways to instrument these components so that the reflective layers of the architecture had useful 
information available to them. He contrasted the Soar project (Laird, Newell, and Rosenbloom 1987) as an 
effort to eliminate and unify components rather than to accumulate and diversify them, as in the Minsky-
Sloman proposals. Ashwin Ram and Larry Bimbaum both pointed out that despite the agreement over the 
architectural proposals it was still not clear what the particular components of the architecture would be. 
They pointed out that we needed to think more about what the units of reasoning would be. In other words, 
we needed to come up with a good list of ways-to-think. Some examples might include the following: 
Solving problems by making analogies to past experiences 
Predicting what will happen next by rule-based mental simulations 
Constructing new "ways to think" by building new collections of agents 
Explaining unexpected events by diagnosing causal graphs 
Learning from problem-solving episodes by debugging semantic networks 
Inferring the state of other minds by re-using self-models 
Classifying types of situations using statistical inference 
Getting unstuck by reformulating the problem situation 
This list could be extended to include all available AI techniques. 
EFTA00226806
Page 412 of 453
Educating the Architecture 
On the morning of the second day of the meeting, we addressed the problem of how to supply the 
architecture with a broad range of commonsense knowledge, so that it would not have to "start from 
scratch." We all agreed that learning was of value, but we didn't all agree on where to start. Many 
researchers would like to start with nothing; however, Aaron Sloman pointed out that an architecture that 
comes with no knowledge is like a programming language that comes with no programs or libraries. 
One view that was expressed was that approaches that start out with too little initial knowledge would 
likely not achieve enough versatility in any practical length of time. Minsky criticized the increasing 
popularity of the concept of a "baby machine"--learning systems designed to achieve great competence, 
given very little initial structure. Some of these ideas include genetic programming, robots that learn by 
associating sensory-motor patterns, and online chatbots that try to learn language by generalizing from 
thousands of conversations. Minsky's complaint was that the problem is not that the concept of a baby 
machine is itself unsound, but rather that we don't know how to do it yet. Such approaches have all failed to 
make much progress because they started out with inadequate schemes for learning new things. You cannot 
teach algebra to a cat; among other things, human infants are already equipped with architectural features to 
equip them to think about the causes of their successes and failures and then to make appropriate changes. 
Today we do not yet have enough ideas about how to represent, organize, and use much of commonsense 
knowledge, let alone build a machine that could learn all of that automatically on its own. As John 
McCarthy noted long ago: "in order for a program to be capable of learning something, it must first be able 
to represent that knowledge." 
There are very few general-purpose commonsense knowledge resources in the AI community. Doug 
Lenat gave a wonderful presentation of the Cyc system, which is presently the project furthest along at 
developing a useful and reusable such resource for the AI community, so that new AI programs don't have 
to start with almost nothing. The Cyc project (Lenat 1995) has developed a great many ways to represent 
commonsense knowledge, and has built a database of over a million commonsense facts and rules. 
However, Lenat estimated that an adult-level commonsense system might require 100 million units of 
commonsense knowledge, and so one of their current directions is to move to a distributed knowledge 
acquisition approach, where it is hoped that eventually thousands of volunteer teachers around the world 
will work together to teach Cyc new commonsense knowledge. Lenat spent some time describing the 
development of friendly interfaces to Cyc that allow nonlogicians to participate in the complicated teaching 
and debugging processes involved in building up the Cyc knowledge base. 
Many of the participants agreed that Cyc would be useful, and some suggested we could even base our 
efforts on top of it, but others were sharply critical. Jeffrey Siskind doubted that Cyc contained the spatial 
and perceptual knowledge needed to do important kinds of visual scene interpretation. Roger Schank 
argued that Cyc's axiomatic approach was unsuitable for making the kinds of generalizations and analogies 
that a more case-based and narrative-oriented approach would support. Srini Narayanan worried that the 
Cyc project was not adequately based on what cognitive scientists have learned about how people make 
commonsense inferences. Oliver Steele concluded that while we disagreed about whether Cyc was 90% of 
the solution or only 10%, this was really an empirical question that we would answer during the course of 
the project. But generally, the architectural proposal was regarded as complementary to parallel efforts to 
accumulate substantial commonsense knowledge bases. 
Minsky predicted that if we used Cyc, we might need to augment each existing item of knowledge 
with additional kinds of procedural and heuristic knowledge, such as descriptions of (1) problems that this 
knowledge item could help solve; (2) ways of thinking that it could participate in; (3) known arguments for 
and against using it; and (4) ways to adapt it to new contexts. 
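Minsky's suggestion might be sketched as a record type attached to each knowledge item. The field names and example values below are invented; only the four kinds of annotation come from the text:

```python
from dataclasses import dataclass, field

@dataclass
class AnnotatedKnowledgeItem:
    """A Cyc-style assertion plus the procedural/heuristic annotations
    Minsky proposed: what it helps solve, which ways of thinking it fits,
    arguments for and against using it, and how to adapt it."""
    assertion: str
    helps_solve: list = field(default_factory=list)    # (1) problem types
    ways_to_think: list = field(default_factory=list)  # (2) reasoning styles
    pros_and_cons: list = field(default_factory=list)  # (3) for/against
    adaptations: list = field(default_factory=list)    # (4) new contexts

item = AnnotatedKnowledgeItem(
    assertion="Unsupported blocks fall",
    helps_solve=["physical prediction"],
    ways_to_think=["rule-based simulation"],
    pros_and_cons=["fails for glued or held blocks"],
    adaptations=["restrict to rigid, unattached objects"],
)
```

The point of the annotations is that the reflective layers can then reason about the knowledge base itself, not just with it.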
It was stressed that knowledge about the world was not enough by itself--we also need a knowledge 
base about how to reason, reflect and learn, the knowledge that the reflective layers of the architecture must 
possess. The problem remains that the programs we have for using knowledge are not flexible enough, and 
neither Cyc's "adult machine" approach of supplying a great deal of world knowledge, nor the "baby 
machine" approach of learning common sense from raw sensory-motor experience, will likely succeed 
without first developing an architecture that supports multiple ways to reason, learn, and reflect upon and 
improve its activities. 
EFTA00226807
Page 413 of 453
An Important Application 
Several of the participants felt that such a project would not receive substantial support unless it 
proposed an application that clearly would benefit much of the world. Not just an improvement to 
something existing, it would need to be one that could not be built without being capable of human-level 
commonsense reasoning. 
After a good deal of argument, several participants converged upon a vision from The Diamond Age, 
a novel by Neal Stephenson. That novel envisioned an "intelligent book"--The Young Lady's Illustrated 
Primer--that, when given to a young girl, would immediately bond with her and come to understand her so 
well as to become a powerful personal tutor and mentor. 
This suggested that we could try to build a personalized teaching machine that would adapt itself to 
someone's particular circumstances, difficulties, and needs. The system would carry out a conversation with 
you, to help you understand a problem or achieve some goal. You could discuss with it such subjects as 
how to choose a house or car, how to learn to play a game or get better at some subject, how to decide 
whether to go to the doctor, and so forth. It would help you by telling you what to read, stepping you 
through solutions, and teaching you about the subject in other ways it found to be effective for you. 
Textbooks then could be replaced by systems that know how to explain ideas to you in particular, because 
they would know your background, your skills, and how you best learn. 
This kind of application could form the basis for a completely new way to interact with computers, 
one that bypasses the complexities and limitations of current operating systems. It would use common 
sense in many different ways: (1) It would understand human goals so that it could avoid the silliest 
mistakes. (2) It would understand human reasoning so that it could present you with the right level of detail 
and avoid saying things that you probably inferred. (3) It would converse in natural language so that you 
could easily talk to it about complex matters without having to learn a special language or complex 
interface. 
To build such a kind of "helping machine," we would first need to give it knowledge about space, 
time, beliefs, plans, stories, mistakes, successes, relationships, and so forth, as well as good conversational 
skills. However, little of this could be realized by anything less than a system with common sense. To 
accomplish this we would need to pursue some sequence of more modest goals that would help one with 
simpler problem types—until the system achieved the sorts of competence that we expect from a typical 
human four- or five-year-old. 
However, to get such a system to work, we would need to address many presently unsolved 
commonsense problems that show up in the model-world problem domain. 
Final Consensus 
The participants agreed that no single technique (such as statistics, logic, or neural networks) could 
cope with a sufficiently wide range of problem-types. To achieve human-level intelligence we must create 
an architecture that can support many different ways to represent, acquire, and apply many kinds of 
commonsense knowledge. 
Most participants agreed that we should combine our efforts to develop a model world that supports 
simplified versions of everyday physical, social, and psychological problems. This simplified world would 
then be used to develop and debug the core components of the architecture. Later, we can expand it to solve 
more difficult and more practical problems. 
The participants did not all agree on which particular larger-scale application would both attract 
sufficient support and also produce substantial progress toward making machines that use commonsense 
knowledge. Still, many agreed with the concept of a personalized teaching machine that would come to 
understand you so well that it could adapt to your particular circumstances, difficulties, and needs. 
Ben Kuipers sketched the diagram shown in figure 8, which captures the general dependencies 
between the three points of consensus: Practical applications depend on developing an architecture for 
commonsense thinking flexible enough to integrate a wide array of processes and representations of 
problems that come up in the model-world problem domain. 
[FIGURE 8 OMITTED] 
A Collaborative Project? 
At the end of the meeting, we brainstormed about how we might organize a distributed, collaborative 
project to build an architecture based on the ideas discussed at this meeting. It is a difficult challenge, both 
technically and socially, to get a community of researchers to work on a common project. However, 
successes in the Open Source community show that such distributed projects are feasible when the 
components can be reasonably disassociated. 
Furthermore, this kind of architecture itself should help to make it easy for members of the project to 
add new types of representations and processes. However, we first would have to develop a set of protocols 
to support the interoperation of such a diverse array of methods. Erik Mueller suggested that such an 
organization could be modeled after the World Wide Web Consortium (W3C), and its job would largely be 
to assess, standardize and publish the protocols and underlying tools that such a distributed effort would 
demand. 
While we did not sketch a detailed plan for how to proceed, Aaron Sloman, Erik Mueller and Push 
Singh listed some technical steps that such a project would need: 
First, it should not be too hard to develop a suitable virtual model world, because the present-day 
video game and computer graphics industry has produced most of the required components. These should 
already include adequate libraries for computer graphics, physics simulation, collision detection, and so 
forth. 
Second, we need to develop and order the set of miniscenarios that we will use to organize and 
evaluate our progress. This would be a continuous process, as new types of problems will constantly be 
identified. 
Third, what kinds of protocols could the agents of this cognitive system use to coordinate with each 
other? This would include messages for updating representations, describing goals, identifying impasses, 
requesting knowledge, and so forth. We would consider the radical proposal to use, for this, an Interlingua 
based on a simplified form of English, rather than trying to develop some brand new ontology for 
expressing commonsense ideas. Of course, each individual agent could be free to use internally whatever 
ontology or representation scheme was most convenient and useful. 
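Although the meeting did not specify a concrete message format, the categories above suggest what such a protocol might look like. The following Python sketch is purely illustrative: the message classes, the agent behavior, and the simplified-English contents are our own assumptions, not anything the participants agreed upon.

```python
from dataclasses import dataclass

# Hypothetical coordination messages; the meeting listed these categories
# but proposed no concrete format.
@dataclass
class Message:
    sender: str
    content: str  # the proposed Interlingua: simplified English

class UpdateRepresentation(Message): pass
class DescribeGoal(Message): pass
class ReportImpasse(Message): pass
class RequestKnowledge(Message): pass

class Agent:
    """One agent of the cognitive system. Its internal representation is
    private; only the messages it exchanges use the shared
    simplified-English form."""

    def __init__(self, name):
        self.name = name
        self.beliefs = []
        self.goals = []

    def receive(self, msg):
        if isinstance(msg, UpdateRepresentation):
            self.beliefs.append(msg.content)
        elif isinstance(msg, DescribeGoal):
            self.goals.append(msg.content)
        elif isinstance(msg, RequestKnowledge):
            # Reply with every belief that mentions the requested topic.
            return [b for b in self.beliefs if msg.content in b]
        elif isinstance(msg, ReportImpasse):
            # A fuller architecture would switch ways-to-think here.
            self.goals.append("resolve: " + msg.content)
        return None

agent = Agent("spatial-reasoner")
agent.receive(UpdateRepresentation("planner", "the ladder is against the wall"))
agent.receive(DescribeGoal("planner", "get the box from the high shelf"))
print(agent.receive(RequestKnowledge("planner", "ladder")))
# prints ['the ladder is against the wall']
```

The point of the sketch is only that the shared vocabulary lives in the messages, while each agent's internals remain free-form, as the proposal requires.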
Fourth, we would need to create a comprehensive catalog of ways-to-think, to incorporate into the 
architecture. A commonsense system should be at least capable of reasoning about prediction, explanation, 
generalization, exemplification, planning, diagnosis, reflection, debugging, learning, and abstracting. 
Fifth, what are the kinds of self-reflections that a commonsense system should be able to make of 
itself, and how should these invoke and modify ways-to-think as problems are encountered? 
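The fourth and fifth steps can be made concrete with a toy "critic-selector" loop that tries several ways-to-think and reflects when one reaches an impasse. The three strategies below echo names from the catalog above, but their trigger fields and behavior are hypothetical stand-ins, not part of any proposed architecture.

```python
# A toy critic-selector loop: try one way-to-think after another, and on
# an impasse record a reflective note and switch to the next. The problem
# is a dict of invented fields; each strategy answers only if its field
# is present.

def try_prediction(problem):
    return problem.get("known_outcome")

def try_analogy(problem):
    precedent = problem.get("similar_case")
    return f"proceed as in {precedent}" if precedent else None

def try_planning(problem):
    steps = problem.get("subgoals")
    return " then ".join(steps) if steps else None

WAYS_TO_THINK = [try_prediction, try_analogy, try_planning]

def solve(problem, log):
    """Return the first answer any way-to-think produces."""
    for way in WAYS_TO_THINK:
        answer = way(problem)
        if answer is not None:
            return answer
        log.append(f"impasse in {way.__name__}; switching")  # self-reflection
    return None

log = []
print(solve({"subgoals": ["drag ladder to wall", "climb", "grab box"]}, log))
# prints drag ladder to wall then climb then grab box
```

A real architecture would of course do far more with the impasse log than append strings; the sketch only shows the shape of the loop, in which reflection on a failure is what selects the next way-to-think.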
Sixth, in any case, such a system will need a substantial, general-purpose, and reusable commonsense 
knowledge base about the spatial, physical, bodily, social, psychological, reflective, and other important 
realms, enough to deal with a broad range of problems within the model world problem domain. 
Finally, we might need to develop a new kind of "intention-based" programming language to support 
the construction of such an architecture. 
Towards the Future 
Since our meeting similar sentiments have been expressed at DARPA, most notably in the recent 
"Cognitive Systems" Information Processing Technology Office (IPTO) Broad Agency Announcement 
(BAA) (Brachman and Lemnios 2002), which solicits proposals for building AI systems that combine 
many elements of knowledge, reasoning, and learning. While we are gratified that architectural approaches 
are becoming more popular, we would like to see more emphasis placed on architectural designs that 
specifically support more common sense styles of thinking. 
There was a genuine sense of excitement at this meeting. The participants felt that it was a rare 
opportunity to focus once more on the grand goal of building a human-level intelligence. Over the next few 
years, we plan to develop a concrete implementation of an architecture based on the ideas discussed at this 
meeting, and we invite the rest of the AI community to join us in such efforts. 
Acknowledgements 
We would like to thank Cecile Dejongh for taking care of the local arrangements, and extend a very 
special thanks to 
for making this meeting happen. This meeting was made possible by the 
generous support of Jeffrey Epstein. 
Note 
(1.) This meeting was held in St. Thomas, U.S. Virgin Islands, on April 14-16, 2002. The meeting 
included the following participants: Larry Birnbaum (Northwestern University), Ken Forbus (Northwestern 
University), Ben Kuipers (University of Texas at Austin), Doug Lenat (Cycorp), Henry Lieberman 
(Massachusetts Institute of Technology), Henry Minsky (Laszlo Systems), Marvin Minsky (Massachusetts 
Institute of Technology), Erik Mueller (IBM T. J. Watson Research Center), Srini Narayanan (University 
of California, Berkeley), Ashwin Ram (Georgia Institute of Technology), Doug Riecken (IBM T. J. Watson 
Research Center), Roger Schank (Carnegie Mellon University), Mary Shepard (Cycorp), Push Singh 
(Massachusetts Institute of Technology), Jeffrey Mark Siskind (Purdue University), Aaron Sloman 
(University of Birmingham), Oliver Steele (Laszlo Systems), 
(independent consultant), Vernor 
Vinge (San Diego State University), and Michael Witbrock (Cycorp). 
References 
Brachman, Ronald; and Lemnios, Zachary. 2002. DARPA's New Cognitive Systems Vision. 
Computing Research News, 14(5): 1, 8. 
Kitano, Hiroaki; Asada, Minoru; Kuniyoshi, Yasuo; Noda, Itsuki; Osawa, Eiichi; and Matsubara, 
Hitoshi. 1997. RoboCup: A Challenge Problem for AI. AI Magazine, 18(1): 73-85. 
Laird, John; Newell, Allen; and Rosenbloom, Paul. 1987. SOAR: An Architecture for General 
Intelligence. AI Journal, 33(1): 1-64. 
Lenat, Doug. 1995. CYC: A Large-scale Investment in Knowledge Infrastructure. Communications of 
the ACM, 38(11):33-38. 
McCarthy, John; Minsky, Marvin; Sloman, Aaron; Gong, Leiguang; Lau, Tessa; Morgenstern, Leona; 
Mueller, Erik; Riecken, Doug; Singh, Moninder; and Singh, Push 2002. An Architecture of Diversity for 
Commonsense Reasoning. IBM Systems Journal, 41(3):530-539. 
Minsky, Marvin. (forthcoming). The Emotion Machine. Pantheon, New York. Several chapters are 
online at http://web.media.mit.edu/people/minsky 
Minsky, Marvin. 1992. Future of AI Technology. Toshiba Review, 47(7). 
Singh, Push ; and Minsky, Marvin. 2003. An Architecture for Combining Ways to Think. Paper 
presented at the International Conference on Knowledge Intensive Multi-Agent Systems. Cambridge, 
Mass., September 30--October 3. 
Sloman, Aaron 2001. Beyond Shallow Models of Emotion. Cognitive Processing, 1(1):530-539. 
Marvin Minsky has made many contributions to AI, cognitive psychology, mathematics, 
computational linguistics, robotics, and optics. In recent years he has worked chiefly on imparting to 
machines the human capacity for commonsense reasoning. His conception of human intellectual structure 
and function is presented in The Society of Mind, which is also the title of the course he teaches at MIT. He 
received his B.A. and Ph.D. in mathematics at Harvard and Princeton. In 1951 he built the SNARC, the 
first neural network simulator. His other inventions include mechanical hands and other robotic devices, the 
confocal scanning microscope, the "Muse" synthesizer for musical variations (with E. Fredkin), and the 
first LOGO "turtle" (with S. Papert). A member of the NAS, NAE and Argentine NAS, he has received the 
ACM Turing Award, the MIT Killian Award, the Japan Prize, the IJCAI Research Excellence Award, the 
Rank Prize and the Robert Wood Prize for Optoelectronics, and the Benjamin Franklin Medal. 
Push Singh is a doctoral candidate in MIT's Department of Electrical Engineering and Computer 
Science. His research is focused on finding ways to give computers humanlike common sense, and he is 
presently collaborating with Marvin Minsky to develop an architecture for commonsense thinking that 
makes use of many types of mechanisms for reasoning, representation, and reflection. He started the Open 
Mind Common Sense project at MIT, an effort to build large-scale commonsense knowledge bases by 
turning to the general public, and has worked on incorporating commonsense reasoning into a variety of 
real-world applications. Singh received his B.S. and M.Eng. in electrical engineering and computer science 
from MIT. 
Aaron Sloman is a professor of AI and cognitive science at the University of Birmingham, UK. He 
received his B.Sc. in mathematics and physics (Cape Town, 1956), and a D.Phil. in philosophy from Oxford 
(1962). Sloman is a Rhodes Scholar, a Fellow of AAAI, AISB, and ECCAI. He is also author of The 
Computer Revolution in Philosophy (1978) and many theoretical papers on vision, diagrammatic 
reasoning, forms of representation, architectures, emotions, consciousness, philosophy of AI, and tools for 
exploring architectures. Sloman maintains the FreePoplog open source web site and is about to embark on a 
large EC-funded robotics project. All papers, presentations, and software are accessible from his home 
page: www.cs.bham.ac.uk/~axs/ 
RELATED ARTICLE: Establishing a Collection of Graded Miniscenarios. 
How would we guide such a project and measure its progress over time? Some participants suggested 
trying to emulate the abilities of human children at various ages. However, others argued that while this 
should inspire us, we should not use it as a plan for the project, because we don't really yet know enough 
about the details of early human mental development. 
Aaron Sloman argued that it might be better to try to model the mind of a four- or five-year-old human 
child because that might lead more directly toward more substantial adult abilities. After the meeting, 
Sloman developed the notion of a "commonsense miniscenario," a concrete description in the form of a 
simple storyboard of a particular skill that a commonsense architecture should be able to demonstrate. Each 
miniscenario has several features: (1) It describes some forms of competence, which are robust insofar as 
they can cope with wide ranges of variation in the conditions; and (2) each comes with some 
meta-competence for thinking and speaking about what was done. For example, competence can have a number 
of different facets, including describing the process; explaining why something was done, or why 
something else would not have worked; being able to answer hypothetical questions about what would 
happen otherwise; being able to improve performance in such ways as improving fluency, removing bugs 
in strategies, and expanding the variety of contexts. The system should also be able to further justify these 
kinds of remarks. 
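As an illustration only, such miniscenarios could be catalogued as simple records pairing a storyboard with its variations and meta-competence queries. Every field name in this sketch is invented for the example rather than taken from Sloman's proposal.

```python
from dataclasses import dataclass

# An invented encoding of a "commonsense miniscenario": a storyboard,
# the variations the competence must tolerate, and the meta-competence
# queries the system should answer about its own performance.
@dataclass
class Miniscenario:
    title: str
    storyboard: list    # ordered steps the system should carry out
    variations: list    # condition changes it must still cope with
    meta_queries: list  # questions about what was done and why
    difficulty: int     # position in the graded ordering

ladder_1 = Miniscenario(
    title="Fetch box from high shelf; ladder already in place",
    storyboard=["climb ladder", "pick up box", "climb down"],
    variations=["box too far to one side", "ladder across the room"],
    meta_queries=["Why are you climbing the ladder?",
                  "Would it be safe with the foot against the wall?"],
    difficulty=1,
)

def grade(library):
    """Order a library of miniscenarios for training and evaluation."""
    return sorted(library, key=lambda s: s.difficulty)
```

Ordering the library by a single difficulty number is of course a simplification; the article suggests grading along several axes (object and process complexity, linguistic competence, understanding of others) at once.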
Sloman proposed this example of a sequence of increasingly sophisticated such miniscenarios in the 
proposed multi-robot problem domain: 
1. Person wants to get box from high shelf. Ladder is in place. Person climbs ladder, picks up box, and 
climbs down. 
2. As for 1, except that the person climbs the ladder, finds he can't reach the box because it's too far to one 
side, so he climbs down, moves the ladder sideways, then as 1. 
3. As for 1, except that the ladder is lying on the floor at the far end of the room. He drags it across the 
room, lifts it against the wall, then as 1. 
4. As for 1, except that if asked while climbing the ladder why he is climbing it, the person answers 
something like "To get the box." It should understand why "To get to the top of the ladder" or "To increase 
my height above the floor" would be inappropriate, albeit correct. 
5. As for 2 and 3, except that when asked, "Why are you moving the ladder?" the person gives a 
sensible reply. This can depend in complex ways on the previous contexts, as when there is already a ladder 
closer to the box, but which looks unsafe or has just been painted. If asked, "would it be safe to climb if the 
foot of the ladder is right up against the wall?" the person can reply with an answer that shows an 
understanding of the physics and geometry of the situation. 
6. The ladder is not long enough to reach the shelf if put against the wall at a safe angle for climbing. 
Another person suggests moving the bottom closer to the wall, and offers to hold the bottom of the ladder 
to make it safe. If asked why holding it will make it safe, he gives a sensible answer about preventing rotation 
of the ladder. 
7. There is no ladder, but there are wooden rungs, and rails with holes from which a ladder can be 
constructed. The person makes a ladder and then acts as in previous scenarios. (This needs further 
unpacking, e.g., regarding sensible sequences of actions, things that can go wrong during the construction, 
and how to recover from them, etc.) 
8. As for 7, but the rungs fit only loosely into the holes in the rails. The person assembles the ladder but 
refuses to climb up it, and if asked why can explain why it is unsafe. 
9. Person watching another who is about to climb up the ladder with loose rungs should be able to 
explain that a calamity could result, that the other might be hurt, and that people don't like being hurt. 
Such a system should be made to face a substantial library of such graded sequences of miniscenarios 
that require it to learn new skills, to improve its abilities to reflect on them, and (with practice) to 
become much more fluent and quick at achieving these tasks. These orderings should be based on such 
factors as the required complexity of objects, processes, and knowledge involved, the linguistic competence 
required, and the understanding of how others think and feel. That library could include all sorts of things 
children learn to do in such various contexts as dressing and undressing dolls, coloring in a picture book, 
taking a bath (or washing a dog), making toys out of Meccano and other construction kits, eating a meal, 
feeding a baby, cleaning a mess made by spilling some powder or liquid, reading a story and answering 
questions about it, making up stories, discussing behavior of a naughty person, and learning to think and 
talk about the past, the future, and about distant places, etc. 
IAC-CREATE-DATE: July 8, 2004 
LOAD-DATE: July 09, 2004 
301 of 1456 DOCUMENTS 
Copyright 2005 Gale Group, Inc. 
All Rights Reserved 
ASAP 
Copyright 2005 American Academy of Arts and Sciences 
Daedalus 
June 22, 2005 
SECTION: Pg. 42(10) Vol. 134 No. 3 ISSN: 0011-5266 
ACC-NO: 135697725 
LENGTH: 5572 words 
HEADLINE: Compromised work. 
BYLINE: Gardner, Howard 
BODY: 
One would like to find an abundance of good workers across the professions: teachers who have 
mastered their subject matter, present it well, and behave in a civil manner toward students and peers; 
physicians who are knowledgeable about the latest techniques and medications and who cater to the ill no 
matter where they are encountered and whether they have resources; lawyers who can argue a case 
persuasively and who make their services available to those in need, irrespective of their ability to pay. 
Occasionally the impressive achievements of such individuals are publicly honored; and those concerned 
about the long-term welfare of the society hope that aspiring teachers, physicians, and lawyers will have 
ample exposure to such exemplars of good work. 
Not surprisingly, the absence of good work commands the attention of scholars, journalists, dramatists, 
politicians, and ordinary folk. We are, perhaps naturally, perhaps understandably, fascinated to learn about 
the teacher who fails an exam or seduces a student; the physician who fakes her credentials or operates on 
the wrong patient; the lawyer who skirts the law or only defends the wealthy. As a friend quipped, Time 
Warner might sell more copies if it renamed its venerable business publication Misfortune. 
In the Good Work Project in which my colleagues and I are involved, we are focusing on those 
individuals and institutions that aspire toward, and in the happiest case, exemplify, good work. There is 
much to be learned from careful study of a journalist like Edward R. Murrow, a physician like Albert 
Schweitzer, a publisher like Katharine Graham, a public servant like John Gardner (no relation). Yet it is 
important to recognize that many individuals fail to achieve good work, that some do not even strive to be 
good workers, and that in the absence of compelling role models, future workers stand little chance of 
becoming good workers themselves. Hence, it is justifiable at times to suspend our focus on good work to 
see what can be learned from frankly deviant cases. 
In what follows, I focus on what we have come to speak of as 'compromised work.' (1) We 
conceptualize this variant as work that is not, strictly speaking, illegal, but whose quality compromises the 
ethical core of a profession. We do not concern ourselves with individuals who merit the descriptor 'bad 
workers'--the journalist who steals, the physician who commits assault and battery, the lawyer who 
murders. Presumably these individuals would engage in such illegal acts irrespective of their professional 
status, and it is the job of law enforcement officials, and not of professional gate-keepers, to call these 
miscreants to account. Rather, our concern is with the journalist who makes up stories, the politician whose 
word has no warrant, the physician who fails to heed the latest medical innovations and thus provides 
substandard treatment. Each of these individuals may at one time have embraced core values—journalistic 
integrity, political veracity, medical acumen—but at some point turned his back on the profession. If we can 
better understand how once good workers begin to compromise their work, we may be able to enhance the 
ranks of good workers. 
It is easiest to spot compromised work in professions that have existed for some time and whose 
principal values are widely shared. In such domains there should be consensual processes of training, 
recognized mentors, and established procedures in place for censuring or ostracizing those whose work 
violates norms of the domain, with disbarment or loss of license as the ultimate sanction. Of the three 
professions I will treat in this essay, law is closest to the prototype, journalism is furthest (many journalists 
lack formal training), and accounting is somewhere in between. 
Since our project began (and no doubt long before), the pages of the newspapers have been filled with 
examples of compromised work; indeed, in preparing this essay I have sometimes been tempted to clip half 
the stories in the daily newspaper. Here I focus on three cases from recent years that caught both my 
attention and that of the broader public. The first case involves Jayson Blair, an ambitious reporter for The 
New York Times who was fired after it was discovered he had plagiarized and fabricated stories. The 
second case centers on Hill and Barlow, a venerable Boston law firm that closed abruptly when its 
profitable real estate department announced it was leaving the firm. The third case centers on the flagship 
accounting firm Arthur Andersen that went bankrupt after the Enron scandal of 2001. 
In my initial study of compromised work, (2) I chose these cases because they apparently represented 
three levels of analysis: Jayson Blair as an instance of compromised work by a single, flawed individual; 
Hill and Barlow as an instance of compromised work within a single institution; and the Arthur Andersen--
Enron debacle as an instance of compromised work throughout a profession. My study revealed, however, 
surprising continuities across these three apparently distinct levels of analysis. In each case, I found I was 
studying individuals as well as institutions, and, indeed, an entire industry. Also to my surprise, I 
discovered that institutions held in high regard might be especially vulnerable to the insidious virus of 
compromised work; I had expected that such institutions harbored righting mechanisms that for some reason 
had failed to detect the offending party. Finally, I expected that at least some instances of compromised 
work would be isolated and of relatively short duration. A far more complex and, to my mind, more 
troubling picture emerged--a picture that, moreover, reflects ominous trends in American society. 
In 1999, Jayson Blair, a young African American with a flair for writing, became a regular reporter for 
The New York Times. Even before his stint at the Times, Blair had been regarded by peers and supervisors 
with a combination of admiration and suspicion. There was no question that Blair wrote well, had a nose 
for important stories, was a gifted schmoozer, and had impressed the governing powers at the college and 
community newspapers where he had worked. At the same time, observers wondered whether he in fact 
had exercised the due diligence that is expected of a reporter; and indeed, supervisors had detected a highly 
unusual number of errors in his stories. While he had occasionally been admonished for carelessness, there 
had been few consequences. In fact, at the Times, Executive Editor Howell Raines and Managing Editor 
Gerald Boyd gave increasingly important assignments to Blair. 
When Blair was discovered to have plagiarized a story from the San Antonio Express-News, he was 
immediately forced to resign. Then on May 11, 2003, in an unprecedented bout of self-examination, The 
New York Times devoted over four full pages to documentation of numerous cases of invention, 
plagiarism, and fraudulent expense and travel reports. Nor did the brouhaha over the Blair affair die down. 
Six weeks later, editors Raines and Boyd were forced to resign their posts, and the new editorial regime at 
the Times explicitly dissociated itself from the policies and practices of its predecessors. 
At first blush, Jayson Blair seemed to be an isolated case--a reporter who refused to play by the rules 
and who may well have been emotionally disturbed. And in fact, there is ample evidence that Blair was a 
troubled young man who should have been carefully scrutinized for years. He was so unpopular at his 
college newspaper that he was relieved of his editorial position. When he was an intern at The Boston 
Globe in 1996-1997 and a freelancer there in 1998-1999, the sloppiness of his coverage was discussed. 
Shortly after he began to work full-time at the Times, Metropolitan Editor Jonathan Landman sent around a 
note that said, "We have got to stop Jayson from writing for the Times. Right now." Blair soon 
accumulated a record number of corrections and complaints about his coverage. His behavior aroused 
dislike and suspicion among many of his contemporaries. But despite ample warning signs, Raines and 
Boyd took him under their wings; he was praised and offered ever-more important assignments. And, to the 
shame of the Times, the decisive discovery of plagiarism was made not by its own staff but by a reporter 
for a regional paper. 
To be sure, Blair had been a bad egg whose misbehaviors were more flagrant than those of his 
contemporaries. But at least since publisher Arthur Sulzberger had appointed Raines as executive editor in 
2001, a strong set of explicit and implicit signals had been sent to the Times staff. Reporters were told they 
had to increase the "competitive metabolism" of the news coverage. Those who wrote flashy, trendy stories 
were rewarded with promotions, special privileges, and ample front-page coverage. In contrast, reporters 
who took a more thoughtful, less sensational approach, who emphasized the journalistic precept 
of carefulness, found themselves increasingly marginalized. Nor was this new culture a secret: in a 
much-discussed portrait of Raines that appeared in The New Yorker in June of 2002, the changing milieu at 
the Times was detailed and critiqued. 
Had Jayson Blair been a truly isolated case, it is highly likely that the Sulzberger-Raines-Boyd 
managerial team would have survived intact and perhaps continued its questionably hectic pace and 
excessively dramatic bent. Once the Blair case broke, however, other heroes and casualties soon emerged. 
The most flagrant consequence was the abrupt resignation of star reporter Rick Bragg, who was accused of 
using unacknowledged stringers and of embellishing his lengthy and highly evocative stories. While Raines 
and Boyd fought to keep their positions, it was inevitable that sooner or later they would be squeezed out. 
The replacement appointment of Bill Keller, an individual widely considered a contrast in 
temperament and journalistic values, served as a sign that the Times was rejecting the go-go atmosphere of 
the previous few years. 
Under Raines and Boyd, the Times had been engaged in an example of what I will call 'superficial 
alignment.' The editors were looking for young reporters who exemplified the pace and coverage they 
sought; the fact that Blair was African American was a bonus and, by the editors' own admission, caused 
them to cut him slack. For his part, Blair was keen at discerning what his editors desired; and, as befits an 
accomplished con man, he knew how to give the impression of good work and to cover his tracks. What 
both sides avoided in this pas de deux was a genuine alignment that honored the tried-and-true mission of 
journalism. Had Blair been subjected to a mentoring regime of tough love, he might have turned into a 
genuinely good reporter. And had he somehow slipped through an otherwise well-regulated training and 
supervision system, it is unlikely that the discovery of his misdeeds would have caused such turmoil in his 
company and, indeed, in the wider journalistic profession. 
During the second week of December of 2002, residents of Boston were astonished to learn that the 
prestigious law firm Hill and Barlow had closed down the previous weekend. The firm had been in 
existence for over a century, was esteemed in the community, and comprised in its legal ranks many 
prominent citizens, including at various times three governors of the Commonwealth. With their deep 
involvement in the community--exemplified by their defense in the famous Sacco-Vanzetti case of the 
1920s--Hill and Barlow partners epitomized what legal scholar Anthony Kronman has called "lawyer 
statesmen." For outsiders, there was little reason to suspect any significant problems at Hill and Barlow--
and none whatsoever to prepare them for its sudden dissolution. 
A word about partnerships is in order here. Examination of about twelve hundred interviews in the 
eight domains considered in the Good Work Project reveals that only lawyers speak regularly about 
partnerships. In part a financial arrangement, in part a social network, the partnership serves as the locus for 
daily activity, the attraction and sharing of clients, and the mechanism for services and payment. The 
transition from associate to partner is the legal equivalent of the attainment of tenure in the academy; and in 
many ways, partners behave like members of a faculty. Young lawyers serve as associates until, assuming a 
good record and available slots, they are welcomed into the partnership, which is likely to be their home for 
the remainder of their professional lives. It goes without saying that the health and stability of the 
partnership is crucial for its constituent members, staff, and clients. 
Each partnership has an institutional culture, passed on both explicitly and implicitly from the older 
partners to the new members of the association. By all reports, the institutional culture of the Hill and 
Barlow of old stressed intellectual and legal excellence; community service, including the holding of 
elected or appointed office; and a willingness to earn somewhat less money than competitors, in return for a 
lifestyle that was more balanced and that went beyond the sheer number and rate of billable hours. (3) 
Outsiders' initial reaction to the sudden closure of Hill and Barlow was shock. After all, this was a 
partnership that had been highly esteemed for decades. To observers and the media, it appeared that overly 
avaricious lawyers from the real estate division had issued a fait accompli to their bewildered colleagues, 