# 2019-03-25_HDR_RobinBaures.py (forked from laurentperrinet/2019-01-16_LACONEU)
# - WP1: Theoretical predictions, fit of model parameters to experimental data and (if applicable) stimulus production (lead: Marseille, particip: Berlin) TO BE WRITTEN BY LAURENT, LYLE & JENS (ca. ½ page per WP)
# Theoretical advances in neural network modelling have recently been driven by the field of machine learning. These propose that biological neural networks could process information according to certain efficiency constraints, such as minimizing the surprise of sensory states given noisy or ambiguous sensory inputs. We have developed such a Hierarchical (Deep) Predictive Coding model using unsupervised learning on (static) natural images, which we wish to extend to (dynamic) sensory flows. Compared to classical Deep Learning models, one advantage of this approach is that the learned synaptic weights are meaningful: they often emerge so as to explicitly represent components of the input image at increasing levels of complexity, for instance edges in the first layer, curvatures in the second, and an eye or a mouth in the third layer when presenting images of faces. Such networks can also be extended to supervised or reinforcement learning by adding efferent layers that accept such labels.
#
# The free parameters of this network are the anatomical constraints that we impose to dimension its layers. In the particular case of this network, these are the number of layers, the number of channels (that is, the number of overlays of the same map in a given layer), and the size of the kernels implementing the convolutions (analogous to receptive fields in biological NNs); a minimal sketch of this parameterization is given right after this comment block. As in most classical Deep Learning networks, these parameters are hand-tuned. However, we observed that these superstructural numbers have a great influence on the quality and type of filters that emerge from the learning phase. Theoretically, this makes sense, as they impose different levels of competition across channels within each layer and define a characteristic size for local interactions.
# We believe that this theoretical problem is similar to that faced by the primary visual cortex in different species. Given a similar sensory input, the intra-cortical network imposes different levels of anatomical complexity. We predict that, given realistic orders of magnitude for different species, we could differentially predict the emergence of different macrostructures across species. In particular, a topographically smooth representation of orientations is present in marmosets, while mice have a salt-and-pepper representation. Building on this level of understanding, we will use our models to make predictions. In particular, we can generate images which are optimal to evoke activity targeted at a given layer, for instance at the second layer (curvatures) versus the first layer (edges). In practice, such images take the form of textures with different levels of structural complexity. For instance, stimuli targeted at the first layer would correspond to a sub-class of stimuli that we have widely used in previous experiments (Motion Clouds). We predict that more complex classes of stimuli should evoke differential activities in the two species.
# - Task 5: Blue-sky: "Efficient hierarchical representations". Overall, the role of visual information processing in cortical layers is to represent information robustly against noise and visual transformations (translations, zooms, rotations). There is thus a pressure on the structure of this processing, at different temporal scales, to maximize the efficiency of this code. This depends both on the statistics of the sensory input (in space and time) and on the repertoire of behavioral tasks that are embodied. We have recently developed a hierarchical unsupervised learning algorithm for neural networks which generates such representations and fits well with the processing performed by feed-forward, lateral and feedback connections. In such a model, message passing between neurons is implemented by propagating waves which progressively refine the representation of sensory data. In particular, tuning the complexity of the representation at different layers reveals the emergence of different structures: progressively increasing the complexity of lateral connections shows a phase transition from a salt-and-pepper organization to a more continuous representation of the space x orientation feature space. This could explain the different architectures observed in different animals. Such models could also make predictions on the adaptation of the network to novel environmental or behavioral contingencies. The goal of this Task is two-fold: first, to understand the physiological and behavioral data in the light of predictive coding theories, rooting these results in a quantitative model; second, to generate predictions. In particular, using our models, it will be possible to generate novel stimulus classes that specifically probe the efficiency of the visual system at its different spatial and dynamic granularities. Lead: Laurent/Lyle?
#
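# ---------------------------------------------------------------------------
# Illustrative sketch of the parameterization discussed above (not used by the
# slide-generation code below): the "anatomical" free parameters (number of
# layers, number of channels per layer, kernel size) can be made explicit in a
# minimal convolutional hierarchy. This assumes PyTorch is available; the
# default sizes are placeholders, not the values used in our experiments.
def sketch_hierarchy(n_channels=(16, 32, 64), kernel_size=9):
    import torch.nn as nn  # lazy import: this sketch is never called by this script
    layers, in_channels = [], 1  # a single (grayscale) input channel
    for out_channels in n_channels:
        # one level = a convolution (receptive-field-like kernel) + a non-linearity;
        # n_channels sets the competition within the layer, kernel_size sets the
        # characteristic size of local interactions
        layers += [nn.Conv2d(in_channels, out_channels, kernel_size,
                             padding=kernel_size // 2),
                   nn.ReLU()]
        in_channels = out_channels
    return nn.Sequential(*layers)
# ---------------------------------------------------------------------------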
__author__ = "Laurent Perrinet INT - CNRS"
__licence__ = 'BSD licence'
DEBUG = True
DEBUG = False
import os
home = os.environ['HOME']
figpath_talk = 'figures'
figpath_slides = os.path.join(home, 'nextcloud/libs/slides.py/figures/')
#
import sys
print(sys.argv)
tag = sys.argv[0].split('.')[0]
if len(sys.argv)>1:
    slides_filename = sys.argv[1]
else:
    slides_filename = None
print('Welcome to the script generating the slides for', tag)
YYYY = int(tag[:4])
MM = int(tag[5:7])
DD = int(tag[8:10])
# see https://github.com/laurentperrinet/slides.py
from slides import Slides
height_px = 80
height_ratio = .7
meta = dict(
embed = True,
draft = DEBUG, # show notes etc
width= 1600,
height= 1000,
# width= 1280, #1600,
# height= 1024, #1000,
margin= 0.1618,#
reveal_path='https://cdnjs.cloudflare.com/ajax/libs/reveal.js/3.7.0/',
theme='simple',
bgcolor="white",
author='Laurent Perrinet, INT',
author_link='<a href="https://laurentperrinet.github.io">Laurent Perrinet</a>',
short_title='Efficient coding of visual information in neural computations',
title='Efficient coding of visual information in neural computations',
conference_url='http://www.laconeu.cl',
short_conference='LACONEU 2019',
conference='LACONEU 2019: 5th Latin-American Summer School in Computational Neuroscience',
location='Valparaiso (Chile)',
YYYY=YYYY, MM=MM, DD=DD,
tag=tag,
url=f'https://laurentperrinet.github.io/{tag}',
abstract="""
""",
wiki_extras="""
----
<<Include(BibtexNote)>>
----
<<Include(AnrHorizontalV1Aknow)>>
----
TagYear{YY} TagTalks [[TagAnrHorizontalV1]]""".format(YY=str(YYYY)[-2:]),
sections=['Efficiency, vision and neurons',
'Sparse coding in the retina?',
'Sparse Hebbian Learning']
)
# https://pythonhosted.org/PyQRCode/rendering.html
# pip3 install pyqrcode
# pip3 install pypng
import pathlib
pathlib.Path(figpath_talk).mkdir(parents=True, exist_ok=True)
figname = os.path.join(figpath_talk, 'qr.png')
if not os.path.isfile(figname):
    import pyqrcode as pq
    code = pq.create(meta['url'])
    code.png(figname, scale=5)
print(meta['sections'])
s = Slides(meta)
###############################################################################
# ~~~~~~~~ intro ~~~~~~~~
###############################################################################
i_section = 0
s.open_section()
###############################################################################
s.hide_slide(content=s.content_figures(
#[os.path.join(figpath_talk, 'qr.png')], bgcolor="black",
[os.path.join(figpath_slides, 'mire.png')], bgcolor=meta['bgcolor'],
height=s.meta['height']*.90),
#image_fname=os.path.join(figpath_aSPEM, 'mire.png'),
notes="""
Check-list:
-----------
* (before) bring VGA adaptors, AC plug, remote, pointer
* (avoid distractions) turn off airport, screen-saver, mobile, sound, ... other running applications...
* (VP) open monitor preferences / calibrate / title page
* (timer) start up timer
* (look) @ audience
http://pne.people.si.umich.edu/PDF/howtotalk.pdf
""")
intro = """
<h2 class="title">{title}</h2>
<h3>{author_link}</h3>
""".format(**meta)
intro += s.content_imagelet(os.path.join(figpath_slides, "troislogos.png"), s.meta['height']*.2) #bgcolor="black",
intro += """
<h4><a href="{conference_url}">{conference}</a>, {DD}/{MM}/{YYYY} </h4>
""".format(**meta)
s.hide_slide(content=intro)
s.hide_slide(content=s.content_figures([figname], cell_bgcolor=meta['bgcolor'], height=s.meta['height']*height_ratio) + '<BR><a href="{url}"> {url} </a>'.format(url=meta['url']),
notes="All the material is available online - please flash this QR code; it leads to a page with links to further references and code")
s.add_slide(content=intro,
notes="""
* (AUTHOR) Hello, I am Laurent Perrinet from the Institute of Neurosciences of
la Timone in Marseille, a joint unit from the CNRS and the AMU
* (OBJECTIVE) in this talk, I will focus on highlighting
some key challenges in understanding visual perception
in terms of efficient coding
using modelization and neural data
* please interrupt
* (ACKNO) this endeavour involves different techniques and tools ...
From the start, I wish to thank the people who collaborated, and in particular ..
mostly funded by the ANR horizontal V1
(fregnac chavane) + ANR TRAJECTORY (o marrre bruno cessac palacios )
+ LONDON (Jim Bednar, Friston)
* (SHOW TITLE) I am interested in the link
between the neural code and the structure of the world.
in particular, for vision, I am researching
the relation between the
functional (in terms of the inferential processes leading to behaviour)
organization (anatomy and activity)
of low-level visual areas (V1) and the structures of natural scenes,
that is of the images that hit the retina and which are
relevant to visual perception in general.
so what makes vision efficient? in particular, if we look around us,
images are formed by a relatively low number of features, which are arranged
according to prototypical structures - lines, curves, contours
""")
review_bib = s.content_bib("LP", "2015", '"Sparse models" in <a href="http://invibe.net/LaurentPerrinet/Publications/Perrinet15bicv">Biologically Inspired Computer Vision</a>')
figpath = os.path.join(home, 'Desktop/2017-01_LACONEU/figures/')
s.add_slide(content="""
<video controls loop width=60%/>
<source type="video/mp4" src="{}">
</video>
""".format(#'figures/MP.mp4'), #
s.embed_video(os.path.join(figpath, 'MP.mp4'))),
notes="""
... this video shows this intuition in a quantitative way. from a natural image,
we extracted independent sources as individual edges at different scales and
orientations
when we reconstruct this image frame by frame (see N)
we can quickly recognize the image
natural images are sparse
""")
ols_bib = s.content_bib("Olshausen and Field", "1997", 'Sparse coding with an overcomplete basis set: A strategy employed by V1?')
for i in [1, 2, 5]:
    s.add_slide(content=s.content_figures(
[os.path.join(figpath_talk, 'Olshausen_'+ str(i) + '.png')], bgcolor="white", embed=False,
title=None, height=s.meta['height']*.85) + ols_bib,
notes="""
a seminal idea was proposed by Olshausen:
* this may be formalized as an inference problem:
edges are different sources, which are known to be sparse:
by mixing these sources one forms the image (transparency hypothesis)
the sparseness is characterized by the pdf of the sources' coefficients
* this inference problem is an inverse problem:
* while this problem may be hard to solve, this may be approached using
a (conjugate) gradient descent which has a nice implementation in terms of
neural networks
I wish to point here to an essential feature compared to classical feed-forward
networks: you cannot do the inference in a single shot in general;
you need a recurrent / recursive network (see arrow) and this is precisely
a possible function for one of the most numerous types of synapses: short-range
lateral interactions
""")
figpath = os.path.join(home, 'pool/science/PerrinetBednar15/talk/')
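# ---------------------------------------------------------------------------
# Illustrative sketch of the inference described in the notes of the Olshausen
# slides above (not called anywhere in this script): recovering sparse
# coefficients a such that image ~ D @ a can be approximated by an iterative,
# ISTA-like gradient descent; the recurrence over n_iter is what a recurrent /
# lateral network could implement. The dictionary D, step size and threshold
# are placeholders, not the values used for the talk's figures.
def sketch_sparse_inference(image, D, lambda_=0.1, eta=0.1, n_iter=100):
    import numpy as np
    a = np.zeros(D.shape[1])                       # sparse coefficients, one per atom
    for _ in range(n_iter):
        residual = image - D @ a                   # prediction error of the current code
        a = a + eta * (D.T @ residual)             # gradient step reducing that error
        a = np.sign(a) * np.maximum(np.abs(a) - eta * lambda_, 0.)  # soft threshold -> sparsity
    return a
# ---------------------------------------------------------------------------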
# anatomical
s.add_slide(content=s.content_figures(
[os.path.join(figpath, 'Bosking97Fig4.jpg')], title=None,
height=s.meta['height']*.85) +
s.content_bib("Bosking et al.", "1997", " Journal of Neuroscience"),
notes="""
in the primary visual cortex for instance,
the set of long-range lateral connections between neurons, which could
act to facilitate detection of contours matching the association field, and/or
inhibit detection of other contours. To fill this role, the lateral connections
would need to be orientation specific and aligned along contours, * (colin) and
indeed such an arrangement has been found in treeshrew's primary visual
cortex (Bosking et al., J Neurosci 17:2112-27, 1997)
* (neural) if one looks at the primary visual area in the occipital lobe of the cortex
using optical imaging as here in the treeshrew by Bosking and colleagues under the
supervision of DF, one could represent the distributed, topographical representation
of orientation selectivity. in (A) and (B) the orientation giving the most response
at each cortical position is represented by hue using the code below from orange for
horizontal to blue for verticals, and typical structures are magnified in (C): stripes
(on the periphery) and pinwheels. You can understand this as a packing of a 3D feature
space on the 2D surface of the cortex.
* (method) Tree shrew orientation preference maps were obtained using optical imaging.
Additionally, 540 nm light was used to map surface blood vessels used for alignment.
Biocytin was then injected into a specific site in V1 and the animal was sacrificed 16
hours later. Slices of V1 were imaged to locate the biocytin bouton and the surface
blood vessels. The blood vessel information was then used to align the orientation
preference maps with the bouton images giving overlaid information on the underlying
connectivity from the injection site on the animal. The original experiment used a total
of ten cases.
* (lateral) we show here one result of Bosking
which overlays, over a map of orientation selectivity, the network of lateral connectivity
originating from a group of neurons with similar orientations and positions. There is
a structure in this connectivity towards locality (more pronounced for site B) +
connecting iso orientations even on long ranges (A). This type of structure tends
to wire together those neurons that have similar orientations, indicating a prior
to colinearities.
*(colin) ... Overall, a typical assumption is that the role of lateral interactions is to
enhance the activity of neurons which are collinear : it is the so-called
**association field** formalized in Field 93, as was for instance modeled neurally in
the work from P. Series or in this version for computer vision
* (physio) is there a match of these structures with the statistics of natural images?
2) Some authors (Kisvarday, 1997, Chavane and Monier) even say it is weak or
inexistent at the scale of the area... 1) Hunt & Goodhill have reinterpreted the above data and shown that there is more diversity
than that -
* TRANSITION : my goal here will be to tackle this problem at different levels:
""")
#Jens Kremkow, Laurent U Perrinet, Cyril Monier, Jose-Manuel Alonso, Ad Aertsen, Yves Fregnac, Guillaume S Masson. Push-pull receptive field organization and synaptic depression: Mechanisms for reliably encoding naturalistic stimuli in V1, URL URL2 URL3 . Frontiers in Neural Circuits, 2016
jens_bib = s.content_bib("Kremkow, LP, Monier, Alonso, Aertsen, Fregnac, Masson", "2016", 'Push-pull receptive field organization and synaptic depression: Mechanisms for reliably encoding naturalistic stimuli in V1', url='http://invibe.net/LaurentPerrinet/Publications/Kremkow16')
jens_url = 'https://www.frontiersin.org/files/Articles/190318/fncir-10-00037-HTML/image_m/'
jens_url = 'figures/'
for l in ['a', 'b', '']:
    s.add_slide(content=s.content_figures(
[jens_url + 'fncir-10-00037-g001' + l + '.jpg'], bgcolor="white",
title=None, embed=False, height=s.meta['height']*.8) + jens_bib,
notes="""
""")
# https://www.frontiersin.org/files/Articles/190318/fncir-10-00037-HTML/image_m/fncir-10-00037-g004.jpg
# https://www.frontiersin.org/files/Articles/190318/fncir-10-00037-HTML/image_m/fncir-10-00037-g005.jpg
s.add_slide(content=s.content_figures(
[jens_url + 'fncir-10-00037-g004.jpg', jens_url + 'fncir-10-00037-g005.jpg'], bgcolor="white", fragment=True,
title=None, embed=False, height=s.meta['height']*.8) + jens_bib,
notes="""
""")
s.close_section()
i_section += 1
###############################################################################
# ~~~~~~~~ Sparse coding ~~~~~~~~
###############################################################################
###############################################################################
s.open_section()
title = meta['sections'][i_section]
s.add_slide_outline(i_section,
notes="""
one can go one step before the cortex and ask the same question in the retina
are the same processes present?
""")
ravelllo_bib = s.content_bib('Ravello, LP, Escobar, Palacios', '2018', 'Scientific Reports', url='https://dx.doi.org/10.1101/350330')
for si in ['2', '1', '5ac', '5dh']:
    s.add_slide(content=s.content_figures(
[os.path.join(figpath_talk, 'Ravello2018_'+ si + '.png')], title=None, embed=False, height=s.meta['height']*.7)+ravelllo_bib,
notes="""
figure 3 of MS1
""")
# figpath = os.path.join(home, 'Desktop/2017-01_LACONEU/figures/')
s.add_slide(content="""
<video controls loop width=85%/>
<source type="video/mp4" src="{}">
</video>
""".format('figures/v1_tiger.mp4'), #s.embed_video(os.path.join(figpath, """.format(s.embed_video(os.path.join(figpath_talk, 'v1_tiger.mp4'))),
notes="""
same procedure with retinal filters (scale, no orientation) = sparseness
""")
droplets_bib = s.content_bib('Ravello, Escobar, Palacios, LP', '2019', 'in prep', url=None)
s.add_slide(content=s.content_figures(
['figures/Droplets_1.png'], fragment=True, transpose=True,
title=None, embed=False, height=s.meta['height']*.8)+droplets_bib,
notes="""
figure 1 of droplets
""")
ols_bib = s.content_bib("Olshausen and Field", "1997", 'Sparse coding with an overcomplete basis set: A strategy employed by V1?')
for i in [2]:
    s.add_slide(content=s.content_figures(
[os.path.join(figpath_talk, 'Olshausen_'+ str(i) + '.png')], bgcolor="white",
title=None, embed=False, height=s.meta['height']*.85) + ols_bib,
notes="""
since we assume the retina would invert this model, let's use the forward model
to generate stimuli = droplets
""")
figpath = os.path.join(home, 'pool/science/RetinaCloudsSparse/2015-11-13_droplets/2015-11-13_1310_full_files/droplets_full')
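# ---------------------------------------------------------------------------
# Illustrative sketch of the forward (generative) model mentioned in the notes
# above (not used below): a "droplets"-like stimulus can be produced by mixing
# a controlled number of randomly drawn atoms of a dictionary D; the fewer the
# active atoms, the sparser the stimulus. D and the defaults are placeholders.
def sketch_droplet_stimulus(D, n_active=20, seed=2019):
    import numpy as np
    rng = np.random.default_rng(seed)
    a = np.zeros(D.shape[1])
    idx = rng.choice(D.shape[1], size=n_active, replace=False)  # which atoms are switched on
    a[idx] = rng.standard_normal(n_active)                      # random amplitudes for those atoms
    return D @ a                                                # linear mixture = the stimulus
# ---------------------------------------------------------------------------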
for fname in ['00012_droplets_i_sparse_3_n_sf_8.mp4', '00006_droplets_i_sparse_5_n_sf_1.mp4', ]:
    s.add_slide(content="""
<video controls loop width=60%/>
<source type="video/mp4" src="{}">
</video>
""".format(s.embed_video(os.path.join(figpath, fname))),
notes="""
very sparse to very dense
""")
droplets_bib = s.content_bib('Ravello, Escobar, Palacios, LP', '2019', 'in prep', url=None)
for suffix in ['a', 'b']:
    s.add_slide(content=s.content_figures(
[#os.path.join(figpath, 'retina_sparseness_droplets.png'),
os.path.join(figpath_talk, 'Droplets_3_' + suffix + '.png')], fragment=False, transpose=True,
title=None, embed=False, height=s.meta['height']*.75)+droplets_bib,
notes="""
figure 3 of droplets
""")
s.add_slide(content=s.content_figures(
['figures/Droplets_5.png'],
title=None, embed=False, height=s.meta['height']*.75)+droplets_bib,
notes="""
figure 5 of droplets
""")
s.close_section()
i_section += 1
###############################################################################
# ~~~~~~~~ Sparse Hebbian Learning - 15'' ~~~~~~~~
###############################################################################
###############################################################################
s.open_section()
title = meta['sections'][i_section]
s.add_slide_outline(i_section,
notes="""
let's move to the second part : learning
the main goal of Olshausen was not only sparse coding but
the fact that, using the sparse code, a simple linear Hebbian learning rule
is enough to separate independent sources
""")
s.add_slide(content="""
<video controls loop width=60%/>
<source type="video/mp4" src="{}">
</video>
""".format('figures/ssc.mp4')) #s.embed_video(os.path.join(figpath, s.embed_video(os.path.join('figures', 'ssc.mp4'))))
#
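# ---------------------------------------------------------------------------
# Illustrative sketch of the point made in the notes above (not called in this
# script): once a sparse code a has been inferred for an image, a plain linear
# Hebbian rule on the dictionary is enough to learn the independent sources.
# Learning rate and normalization scheme are placeholders.
def sketch_hebbian_update(D, image, a, eta=0.01):
    import numpy as np
    residual = image - D @ a             # what the current dictionary fails to explain
    D = D + eta * np.outer(residual, a)  # Hebbian update: pre (residual) x post (sparse code)
    D = D / np.maximum(np.linalg.norm(D, axis=0), 1e-12)  # keep every atom normalized
    return D
# ---------------------------------------------------------------------------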
figpath = os.path.join(home, 'science/ABC/HULK/')
for suffix in ['map', 'HAP']:
    s.add_slide(content=s.content_figures(
[os.path.join(figpath, 'figure_' + suffix + '.png')], bgcolor="black",
title=None, height=s.meta['height']*.75),
notes="""
a contribution we made to this algorithm is homeostasis
""")
CNN_ref = '(from <a href="http://cs231n.github.io/convolutional-networks/">http://cs231n.github.io/convolutional-networks/</a>)'
s.add_slide(content=s.content_figures(
['http://cs231n.github.io/assets/cnn/depthcol.jpeg'], bgcolor="black",
title=None, embed=False, height=s.meta['height']*.85) + CNN_ref,
notes="""
this can be extended to convolutional neural networks
""")
for suffix in ['CNN']:
    s.add_slide(content=s.content_figures(
[os.path.join(figpath, 'figure_' + suffix + '.png')], bgcolor="black",
title=None, height=s.meta['height']*.85),
notes="""
discussion...
""")
for suffix in ['1', '2a', '2b']:
    s.add_slide(content=s.content_figures(
[os.path.join(figpath_talk, 'SDPC_' + suffix + '.png')], bgcolor="black",
title=None, embed=False, height=s.meta['height']*.85),
notes="""
Multi-layered unsupervised Learning
""")
s.add_slide(content=s.content_figures(
[os.path.join(figpath_talk, 'SDPC_' + suffix + '.png') for suffix in ['3', '4']],
bgcolor="black", fragment=True,
title=None, embed=False, height=s.meta['height']*.75),
notes="""
allows for better classification, as shown here for MNIST digits
""")
s.close_section()
###############################################################################
# ~~~~~~~~ OUTRO - 5'' ~~~~~~~~
###############################################################################
###############################################################################
s.open_section()
s.add_slide(content=intro,
notes="""
* Thanks for your attention!
""")
s.close_section()
if slides_filename is None:
    with open("/tmp/wiki.txt", "w") as text_file:
        text_file.write("""\
#acl All:read
= {title} =
What:: talk @ the [[{conference_url}|{conference}]]
Who:: {author}
When:: {DD}/{MM}/{YYYY}
Where:: {location}
Slides:: https://laurentperrinet.github.io/{tag}
Code:: https://github.com/laurentperrinet/{tag}/
== reference ==
{{{{{{
#!bibtex
@inproceedings{{{tag},
Author = "{author}",
Booktitle = "{conference}, {location}",
Title = "{title}",
Url = "{url}",
Year = "{YYYY}",
}}
}}}}}}
## add an horizontal rule to end the include
{wiki_extras}
""".format(**meta))
else:
    s.compile(filename=slides_filename)
# Check-list:
# -----------
#
# * (before) bring miniDVI adaptors, AC plug, remote, pointer
# * (avoid distractions) turn off airport, screen-saver, mobile, sound, ... other running applications...
# * (VP) open monitor preferences / calibrate / title page
# * (timer) start up timer
# * (look) @ audience
#
# Preparing Effective Presentations
# ---------------------------------
#
# Clear Purpose - An effective image should have a main point and not be just a collection of available data. If the central theme of the image isn't identified readily, improve the paper by revising or deleting the image.
#
# Readily Understood - The main point should catch the attention of the audience immediately. When trying to figure out the image, audience members aren't fully paying attention to the speaker - try to minimize this.
#
# Simple Format - With a simple, uncluttered format, the image is easy to design and directs audience attention to the main point.
#
# Free of Nonessential Information - If information doesn't directly support the main point of the image, reserve this content for questions.
#
# Digestible - Excess information can confuse the audience. With an average of seven images in a 10-minute paper, roughly one minute is available per image. Restrict information to what is extemporaneously explainable to the uninitiated in the allowed length of time - reading prepared text quickly is a poor substitute for editing.
#
# Unified - An image is most effective when information is organized around a single central theme and tells a unified story.
#
# Graphic Format - In graphs, qualitative relationships are emphasized at the expense of precise numerical values, while in tables, the reverse is true. If a qualitative statement, such as "Flow rate increased markedly immediately after stimulation," is the main point of the image, the purpose is better served with a graphic format. A good place for detailed, tabular data is in an image or two held in reserve in case of questions.
#
# Designed for the Current Oral Paper - Avoid complex data tables irrelevant to the current paper. The audience cares about evidence and conclusions directly related to the subject of the paper - not how much work was done.
#
# Experimental - There is no time in a 10-minute paper to teach standard technology. Unless the paper directly examines this technology, only mention what is necessary to develop the theme.
#
# Visual Contrast - Contrasts in brightness and tone between illustrations and backgrounds improves legibility. The best color combinations include white letters on medium blue, or black on yellow. Never use black letters on a dark background. Many people are red/green color blind - avoid using red and green next to each other.
#
# Integrated with Verbal Text - Images should support the verbal text and not merely display numbers. Conversely, verbal text should lay a proper foundation for each image. As each image is shown, give the audience a brief opportunity to become oriented before proceeding. If you will refer to the same image several times during your presentation, duplicate images.
#
# Clear Train of Thought - Ideas developed in the paper and supported by the images should flow smoothly in a logical sequence, without wandering to irrelevant asides or bogging down in detail. Everything presented verbally or visually should have a clear role supporting the paper's central thesis.
#
# Rights to Use Material - Before using any text, image, or other material, make sure that you have the rights to use it. Complex laws and social rules govern how much of someone's work you can reproduce in a presentation. Ignorance is no defense. Check that you are not infringing on copyright or other laws or on the customs of academic discourse when using material.
#
# http://pne.people.si.umich.edu/PDF/howtotalk.pdf
#