Memorandums

Notes from unremarkable days of research and teaching in perceptual and cognitive psychology

JavaScript-STAR: Analysis of variance (up to three within-subject factors)

2005-06-30 | Education for 3rd- and 4th-year students
A program (freeware) that can carry out analysis of variance for three-factor within-subject designs, including tests of simple main effects and multiple comparisons.
Example problems are included, so it is worth trying out.

References
JavaScript-STAR
www.kisnet.or.jp/nappa/software/star/index.htm

Screen WaitBlanking: PsychToolBox

2005-06-30 | PsychToolBox
>>Screen WaitBlanking?

Usage:

framesSinceLastWait=Screen(windowPtrOrScreenNumber,'WaitBlanking',[waitFrames])

Wait specified number of blankings (frame endings). Call with waitFrames==1 (or
omit it, since that's the default) to wait for the beginning of the next frame.
Video cards mark the end of each video frame by briefly reducing the voltage to
the Vertical Blanking Level (VBL), which "blanks" the screen to black. We do all
video timing relative to the beginning of blanking.
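
A minimal sketch of typical use (not part of the help text; window and nFrames are assumed to already exist, and old-style immediate drawing is assumed):

% Alternate the window between white and black once per video frame,
% locked to vertical blanking.
white=WhiteIndex(window); black=BlackIndex(window);
for frame=1:nFrames
Screen(window,'WaitBlanking'); % returns at the start of the next frame
if rem(frame,2)
Screen(window,'FillRect',white);
else
Screen(window,'FillRect',black);
end
end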

How to use Rush: PsychToolBox

2005-06-30 | PsychToolBox
>>help Rush

Rush(string,[priorityLevel]) % or alternate calling form below

Rush.mex runs a critical bit of your Matlab code with minimal
interruption by Macintosh interrupt tasks. The first argument is a
string containing Matlab code to be passed to EVAL. Within the string,
you can have multiple statements separated by ";" or ",".

The optional "priorityLevel" argument specifies how much to block
interrupt tasks. The allowed values are 0, 0.5, 1, 2, 3, 4, 5, 6, and 7.
A priorityLevel of 0 gives normal execution: simply calls EVAL. Rush
offers two approaches to minimizing interruption, selected by setting
priorityLevel 0.5 (the default) or higher (1 ... 7). Both approaches
temporarily block the processing of deferred tasks, which lessens
interruption of your code. ("Deferred" tasks are called by the Mac OS to
do the time-consuming work scheduled by an interrupt service routine.)
Setting priorityLevel>0.5 also blocks interrupts, blocking more
interrupts as the priorityLevel is raised higher. Raising priority
disables important functions, which is okay if your rushed code doesn't
use them.

Use MaxPriority to determine the highest priority that allows normal
operation of the functions you use, e.g. Snd and Screen 'WaitBlanking'.
We suggest you always call MaxPriority rather than hard coding any
particular priorityLevel, so that your program will gracefully adapt to
run optimally on any computer. Here's a typical use:

Screen('Screens'); % Make sure all functions (Screen.mex) are in memory.
i=0; % Allocate all variables.
loop={
'for i=1:100;'
'Screen(window,''WaitBlanking'');'
'Screen(''CopyWindow'',w(i),window);'
'end;'
};
priorityLevel=MaxPriority(window,'WaitBlanking');
Rush(loop,priorityLevel);

Screen function example 2: PsychToolBox

2005-06-30 | PsychToolBox
>>screen screens?

Usage:

screenNumbers=Screen('Screens')

Return an array of screenNumbers.
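
A common pattern (a sketch, not from the help text) is to open the experimental window on the highest-numbered screen, which on a multi-monitor setup is usually the external display:

screens=Screen('Screens'); % e.g. [0 1] on a two-monitor setup
screenNumber=max(screens); % use the last screen for stimuli
[window,windowRect]=Screen(screenNumber,'OpenWindow');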

MovieDemo 2: PsychToolBox

2005-06-30 | PsychToolBox
MovieDemo, second half

% Show the movie again, now using Rush to minimize interruptions.
loop={
'for i=1:length(w);'
'Screen(window,''WaitBlanking'');'
's(i)=GetSecs;'
'Screen(''CopyWindow'',w(i),window,rect,rect2);'
'end;'
};
priorityLevel=MaxPriority(screenNumber,'WaitBlanking');
Screen(window,'FillRect');
Screen(window,'DrawText',sprintf('Showing movie at priority %g ...\n',priorityLevel),10,30);
i=0;Screen('Screens'); % Make sure all Rushed variables and functions are in memory.
Rush(loop,priorityLevel);
s=diff(s); % intervals between successive frames, in seconds
frames2=sum(s)*FrameRate(screenNumber)-length(s); % frames missed while Rushed
ShowCursor;

Screen('FillOval'): PsychToolBox

2005-06-30 | PsychToolBox
>>Screen FillOval?

Usage:

Screen(windowPtr,'FillOval',[color],[rect])

Fills an ellipse with the given color, inscribed within "rect". "color" is the
clut index (scalar or [r g b] triplet) that you want to poke into each pixel;
default produces white with the standard CLUT for this window's pixelSize.
Default rect is whole window.
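
For example (a sketch; window and windowRect are assumed to come from 'OpenWindow'):

% Draw a black disk, 100 pixels in diameter, centered in the window.
black=BlackIndex(window);
r=CenterRect([0 0 100 100],windowRect);
Screen(window,'FillOval',black,r);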

Screen function example 1: PsychToolBox

2005-06-30 | PsychToolBox
>>Edit MovieDemo

First half ----
% Open a window
screenNumber=0;
pixelSize=8;
[window,windowRect]=Screen(screenNumber,'OpenWindow',[],[],pixelSize);
n=300;
% n=min(windowRect(RectBottom),2*round((Bytes*2-1e6)^0.33/2)); % make movie as big as memory will allow.
rect=[0 0 n n];
rect2=AlignRect(rect,windowRect,RectRight,RectBottom);

WaitSecs(1); % Give display a moment to recover from the change of pixelSize

% Make a movie by drawing disks into 1+n/2 offscreen windows.
black=BlackIndex(window);
for i=1:(1+n/2)
w(i)=Screen(window,'OpenOffscreenWindow',[],rect);
w(n+2-i)=w(i);
r=[0 0 2 2]*(i-1);
Screen(w(i),'FillOval',black,r);
end

% Show the movie, first forwards, then backwards.
Screen(window,'TextSize',24);
Screen(window,'DrawText','Showing movie at priority 0 ...',10,30);
HideCursor;
for i=1:length(w)
Screen(window,'WaitBlanking');
s(i)=GetSecs;
Screen('CopyWindow',w(i),window,rect,rect2);
end
s=diff(s); % intervals between successive frames, in seconds
frames1=sum(s)*FrameRate(screenNumber)-length(s); % frames missed at priority 0

The Screen function: PsychToolBox

2005-06-30 | PsychToolBox
>>help screen

Screen is a MEX file to use on- and off-screen windows for display in
experiments. Screen has many functions; type "Screen" for a list:
Screen
For explanation of any particular screen function, just add a question
mark "?". E.g. for 'OpenWindow', try either of these equivalent forms:
Screen('OpenWindow?')
Screen OpenWindow?
All the Screen Preference settings are documented together:
Screen Preference?


Each on-screen window normally fills a monitor's whole screen. (The OS9
version allows smaller windows; the Win version doesn't.) Off-screen
windows are invisible, but useful as an intermediate place to create and
store images for later display. Copying from window to window is very
fast, e.g. 36 MB/s on a PowerMac 7500/100 and 171 MB/s on a PowerBook
G4/500. It's easy to precompute a series of off-screen windows and then
show them as a movie, in real time, one per video frame.

Screen ARGUMENTS

"windowPtr" argument: Screen 'OpenWindow' and 'OpenOffscreenWindow' both
return a windowPtr, a number that designates the window you just
created. You can create many windows. And you can obtain a windowPtr to
any of Matlab's windows. To use a window, you pass its windowPtr to the
Screen function you want to apply to that window.

"rect" argument: "rect" is a 1x4 matrix containing the upper left and
lower right coordinates of an imaginary box containing all the pixels.
Thus a rect [0 0 1 1] contains just one pixel. All screen and window
coordinates follow Apple Macintosh conventions. (In Apple's convention, the
pixels occupy the space between the coordinates.) Coordinates can be local to
the window (i.e. 0,0 origin is at upper left of window), or local to the
screen (origin at upper left of screen), or "global", which follows
Apple's convention of treating the entire desktop (all your screens) as
one big screen, with origin at the upper left of the main screen, which
has the menu bar. You can rearrange the screens in the desktop by using
Apple's Control Panel: Monitors or Monitors and Sounds. Historically
we've had two different orderings of the elements of rect, so, for
general compatibility, all of the Psychophysics Toolbox refers to the
elements symbolically, through RectLeft, RectTop, etc. Since 2/97, we
use Apple's standard ordering: RectLeft=1, RectTop=2, RectRight=3,
RectBottom=4.
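
A short sketch of the symbolic indexing described above (windowRect is assumed to come from 'OpenWindow'):

width=windowRect(RectRight)-windowRect(RectLeft); % same as windowRect(3)-windowRect(1)
height=windowRect(RectBottom)-windowRect(RectTop); % same as windowRect(4)-windowRect(2)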

[optional arguments]: Brackets in the function list, e.g. [color],
indicate optional arguments, not matrices. Optional arguments must be in
order, without omitting earlier ones, but you can use the empty matrix
[] as a place holder, with the same effect as omitting it.
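
For example (a sketch), to give an explicit rect while leaving the earlier color argument at its default, pass the empty matrix in its place:

Screen(window,'FillOval',[],[0 0 100 100]); % default color, explicit rect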



HALTING A PROGRAM

OS9:
Command-period halts any program. (Type a period "." while holding the
apple-cloverleaf "command" key down.) If the command-period is
intercepted by any of our MEX files, all of Screen's windows will be
closed, and the cursor will be shown, to allow you to work normally in
the Matlab Command window.

KbCheck: PsychToolBox

2005-06-30 | PsychToolBox
>>help KbCheck

[keyIsDown,secs,keyCode] = KbCheck

Return keyboard status (keyIsDown), time (secs) of the status check, and
keyboard scan code (keyCode).

keyIsDown 1 if any key, including modifiers such as <shift>,
<control> or <caps lock> is down. 0 otherwise.

secs time of keypress as returned by GetSecs.

keyCode OS9: a 128-element logical array. Each bit within the
logical array represents one keyboard key. If a key is
pressed, its bit is set, otherwise the bit is clear. To
convert a keyCode to a vector of key numbers use
FIND(keyCode). To find a key's keyNumber use KbName
or KbDemo.

See also: FlushEvents, KbName, KbDemo, KbWait, GetChar, CharAvail.

David Brainard and Allen Ingling
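
A minimal polling loop built from this help text (a sketch; keyboard setup and KbName are assumed to be available):

% Spin until any key goes down, then report the first key and the time.
keyIsDown=0;
while ~keyIsDown
[keyIsDown,secs,keyCode]=KbCheck;
end
keyNumbers=find(keyCode); % convert keyCode to a vector of key numbers
fprintf('%s pressed at %g s\n',KbName(keyNumbers(1)),secs);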

Face perception: Testing the linear feature model

2005-06-25 | Research: Face
A Test of the Linear Feature Model

...
According to the m-out-of-n principle, the categorization rule can be defined as the quantitative combination (or sum) of feature values, plus a threshold criterion that separates the two categories. By this definition, each pattern with a negative feature sum (n=31) belongs to one category (NEG), while each pattern with a positive feature sum (n=31) belongs to the other category (POS).
...
As predicted by Wittgenstein and Ryle, the instances of a polymorphous class are not equally valid members of the categories to which they belong. Typicality of category membership varies along the summary dimension; one pattern in each class (with an absolute feature sum of four) is a perfect category member, patterns with an absolute sum of three are very typical, those with an absolute sum of two are intermediate, and those with an absolute sum of one are poor members.

Therefore, the only way to classify the stimuli exactly is to respond to their "feature summary". In order for successful categorization to occur, we predicted that the pigeons must be able to:

Extract information about, or attend to, all four features of class membership

Equalize the weights given these features (or resist selective attention)

Combine this feature information in an additive manner.
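
That additive rule can be made concrete in a few lines. The following sketch assumes a ternary -1/0/+1 coding of the four features; the coding is my reconstruction (it reproduces the counts in the text: 31 patterns per class, absolute sums of 1 to 4), not code from the source:

% Four features, each -1, 0, or +1, give 3^4 = 81 patterns. The 19
% patterns with feature sum 0 are excluded, leaving 31 NEG and 31 POS.
[f1,f2,f3,f4]=ndgrid(-1:1,-1:1,-1:1,-1:1);
patterns=[f1(:) f2(:) f3(:) f4(:)];
featureSum=sum(patterns,2); % equal weights, additive combination
category=sign(featureSum); % +1 = POS, -1 = NEG, 0 = excluded
typicality=abs(featureSum); % 4 perfect, 3 very typical, 2 intermediate, 1 poor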

References
Visual Categorization in Pigeons
Ludwig Huber/Institute of Zoology, University of Vienna
http://www.pigeon.psy.tufts.edu/avc/huber/feature.htm#A%20test

Facial expression perception: the Linear Feature Model

2005-06-24 | Research: Face
Quarterly Journal of Experimental Psychology: Comparative and Physiological Psychology, Aug 1997, Vol. 50B(3), 253-268

"Categorical Discrimination of Human Facial Expressions by Pigeons: A Test of the Linear Feature Model"

Masako Jitsumori and Masato Yoshihara

Abstract:

Pigeons were trained to discriminate human facial expressions, happiness and anger, in a go/no-go discrimination procedure. Five pigeons learned to discriminate photographs of the happy and angry faces of 25 different people and showed high levels of transfer to novel faces expressing the training emotions. The pigeons directed their pecks predominantly to the mouth, eyes, or the area between these features. The pigeons were then tested with familiar stimuli in which the upper and lower parts of the face were manipulated separately by substitution or removal of facial features ("eyes-and-eyebrows" and "mouth"). It was shown that the salience of particular features differed considerably among the birds, but that a linear feature model adequately accounted for discriminative performance of the birds with these stimuli. Furthermore, the discrimination was maintained when these features were inverted. Thus, the so-called Thatcher illusion did not occur. It is suggested that the discrimination is based not on a feature configuration or perceptual gestalt but on an additive integration of individual features.


Facial expression perception: the fuzzy logical model

2005-06-24 | Research: Face
Journal of Experimental Psychology: Human Perception and Performance, 1997, Vol. 23, No. 1, 213-226

"Featural Evaluation, Integration, and Judgment of Facial Affect "

John W. Ellison and Dominic W. Massaro University of California, Santa Cruz

The paradigm of the fuzzy logical model of perception (FLMP) is extended to the domain of perception and recognition of facial affect. Two experiments were performed using a highly realistic computer-generated face varying on 2 features of facial affect. Each experiment used the same expanded factorial design, with 5 levels of brow deflection crossed with 5 levels of mouth deflection, as well as their corresponding half-face conditions, for a total stimulus set of 35 faces. Experiment 1 used a 2-alternative, forced-choice paradigm (either happy or angry), whereas Experiment 2 used 9 rating steps from happy to angry. Results indicate that participants evaluated and integrated information from both features to perceive affective expressions. Both choice probabilities and ratings showed that the influence of 1 feature was greater to the extent that the other feature was ambiguous. The FLMP fit the judgments from both experiments significantly better than an additive model. Our results question previous claims of categorical and holistic perception of affect.
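
The contrast between the two models can be sketched numerically. The integration rule below is the standard FLMP statement (multiplicative integration of feature support plus a relative-goodness decision rule), not code from this paper, and the feature supports are hypothetical truth values in [0,1] for "happy":

tBrow=0.9; tMouth=0.5; % hypothetical feature supports for 'happy'
% FLMP: multiply the supports, then normalize against the 'angry' alternative.
pFLMP=(tBrow*tMouth)/(tBrow*tMouth+(1-tBrow)*(1-tMouth)); % = 0.90
pAdd=(tBrow+tMouth)/2; % additive (averaging) model, for comparison: = 0.70
% With the mouth ambiguous (0.5), FLMP lets the brow dominate, matching the
% finding that one feature's influence grows as the other becomes ambiguous.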

References
Perceptual Science Laboratory at the University of California, Santa Cruz.
http://mambo.ucsc.edu/

Mr. Destiny

2005-06-23 | Life
"I make suggestions.
You make a choice.
That's destiny. "

"Who are you?
Angel ? "

"Not ... exactly."


Murray, Sekuler, & Bennett (2001)

2005-06-23 | Research: V. Interp.
Psychonomic Bulletin & Review, 2001, 8(4), 713-720
"Time course of amodal completion revealed by a shape discrimination task"
Richard F. Murray, Allison B. Sekuler, and Patrick J. Bennett

GENERAL DISCUSSION
The finding that amodal completion can affect performance in perceptual tasks, but only if given enough time, is consistent with earlier reports. Our estimates of the time required for completion ranged from 46 to 114 msec, and averaged 75 msec. Although our results clearly indicate a time course for completion, our estimate of its duration should not be taken as a fixed constant of visual processing. It is becoming increasingly clear that the time required for completion is not fixed but varies with task and stimulus.
Sekuler and Palmer's (1992) first primed-matching study found amodal completion to require 100-200 msec; however, in a later primed-matching study, Guttman and Sekuler (2001) found that completion time varied from less than 75 msec to over 200 msec, depending on how much of the stimulus was occluded. Shore and Enns (1997) manipulated the amount of occlusion in their stimuli, and they also found shorter completion times for smaller amounts of occlusion. Using performance in a shape discrimination task as a measure of completion, Ringach and Shapley (1996) found amodal completion to require 120-170 msec. This is longer than the estimate we found using similar methods, but their stimuli were very large (17° × 17°) and highly occluded (80%), so if completion time increases with the amount of completion required, a longer time course would be expected (Guttman & Sekuler, 2001; Shore & Enns, 1997). Although it is difficult to compare the completion times obtained in different studies directly, the fact that studies with different methods and stimuli all conclude that amodal completion has a rapid but measurable time course provides converging evidence and makes it unlikely that the time course is an artifact of the methods employed.


Time course of completion

2005-06-23 | Research: V. Interp.
The time course of visual completion measured by response classification
E. Shubel & J. M. Gold, Vision Sciences Society (poster), 2003
http://vislab.psych.indiana.edu/~jgold/jgold/jmg/presentations.html

cf. Time course of completion
Murray, R. F., Sekuler, A. B., & Bennett, P. J. (2001). Time course of amodal completion revealed by a shape discrimination task. Psychonomic Bulletin & Review, 8(4), 713-720. : 46-114 msec
Ringach, D. L., & Shapley, R. (1996). Spatial and temporal properties of illusory contours and amodal completion. Vision Research, 36(19), 3037-3050. : 120-170 msec
Sekuler, A. B., & Palmer, S. E. (1992). Perception of partly occluded objects: A microgenetic analysis. JEP: General, 121, 95-111. : 100-200 msec