<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
<title>Toto do stuff</title>
<link href="https://totetmatt.github.io/feed.xml" rel="self" />
<link href="https://totetmatt.github.io" />
<updated>2023-07-08T13:39:45+02:00</updated>
<author>
<name>Totetmatt</name>
</author>
<id>https://totetmatt.github.io</id>
<entry>
<title>Twitter Streaming Importer and the New Twitter</title>
<author>
<name>Totetmatt</name>
</author>
<link href="https://totetmatt.github.io/twitter-streaming-importer-and-the-new-twitter.html"/>
<id>https://totetmatt.github.io/twitter-streaming-importer-and-the-new-twitter.html</id>
<updated>2023-07-08T13:39:45+02:00</updated>
<summary>
<![CDATA[
Due to the current situation with the Twitter API, a FLOSS developer strike is being observed for Gephi’s Twitter Streaming Importer plugin. This will stay…
]]>
</summary>
<content type="html">
<![CDATA[
<p>Due to the current situation with the Twitter API, a FLOSS developer strike is being observed for Gephi’s Twitter Streaming Importer plugin. This will remain the case until there is a positive change in the API and the platform in general.</p>
<ul>
<li>No more development, updates, maintenance, or support.</li>
<li>The code itself, which transforms tweets into a graph, is not restricted. API access is entirely the responsibility of the Twitter company, and I can’t do anything about it. Check with them.</li>
<li>The plugin is still available and should work if you have working API access.</li>
<li>The code is open source and you are free to adapt it to new acquisition methods.</li>
</ul>
<p>[Update from 08-07-2023]</p>
<p>It’s getting worse, so the strike is still on. A few comments:</p>
<ul>
<li>The access policy seems to have changed a lot, and there is no guarantee the plugin still works. Gephi and the Twitter Streaming Importer are <strong>completely independent and unrelated</strong> to the Twitter company, which means that even if you paid for API access, there is no guarantee at all from the Gephi team and the Twitter Streaming Importer team that the plugin will work.</li>
<li>The code is still open source; you are <em>libre</em> to read and adapt it as long as you respect the open-source licence of the Twitter Streaming Importer.</li>
</ul>
]]>
</content>
</entry>
<entry>
<title>Network Graph Rendering: Isopleths with Gmic</title>
<author>
<name>Totetmatt</name>
</author>
<link href="https://totetmatt.github.io/network-graph-rendering-isopleths-with-gmic.html"/>
<id>https://totetmatt.github.io/network-graph-rendering-isopleths-with-gmic.html</id>
<category term="rendering"/>
<category term="map"/>
<category term="graph"/>
<category term="gmic"/>
<category term="Gephi"/>
<updated>2023-02-26T14:36:52+01:00</updated>
<summary>
<![CDATA[
<img src="https://totetmatt.github.io/media/posts/51/output-2.png" alt="" />
Mathieu Jacomy is currently experimenting with a type of graph rendering using a technique called “hillshading”; a demo is accessible here https://observablehq.com/d/7d19c2d05caf9fb2. The idea of…
]]>
</summary>
<content type="html">
<![CDATA[
<img src="https://totetmatt.github.io/media/posts/51/output-2.png" alt="" />
<p>Mathieu Jacomy is currently experimenting with a type of graph rendering using a technique called “hillshading”; a demo is accessible here <a href="https://observablehq.com/d/7d19c2d05caf9fb2">https://observablehq.com/d/7d19c2d05caf9fb2</a>.
The idea is to add information that enhances the readability of the graph, especially when there are dense clusters of nodes.</p>
<p>The current script works mostly in JavaScript with D3.js. I wanted to find a way to do it without any JS, locally on my computer. Then I remembered a wonderful tool called <a href="https://gmic.eu/">GMIC</a> that can do a lot of advanced image processing.</p>
<p>To simplify things, we are going to drop the shading part of hillshading, which in my opinion isn’t the critical part of the process described by Mathieu Jacomy.
In the end, hillshading is just shaded <a href="https://en.wikipedia.org/wiki/Contour_line#Isopleths">isopleths</a>, the lines you see on a map that indicate altitude. That should be enough for the scope of the script we want to achieve.</p>
<p>To use the script we need two applications:</p>
<ul>
<li><a href="https://gmic.eu/">GMIC</a>, we need the CLI version of the tool. It’s available for Windows and Linux.</li>
<li><a href="https://gephi.org/">Gephi</a> , for generating graph.</li>
</ul>
<h2 id="export-network-graph">Export Network Graph</h2>
<p>Take any network graph you have; the final effect works better on graphs with a lot of clusters.</p>
<p>When you’re happy with your spatialisation in preview, export two PNG files with the following configuration:</p>
<ul>
<li><code>background.png</code>, where only the nodes are rendered: use the Preview Settings to remove the node labels, the edges and the edge labels. Export this file with these options: 4096x4096, <strong>no</strong> transparent background, 0% margin.</li>
<li><code>foreground.png</code>, where you can render the nodes and the edges (labels may also be rendered, but can cause a small issue later; I will come back to this). Export this file with these options: 4096x4096, <strong>transparent background</strong>, 0% margin.</li>
</ul>
<p><figure class="post__image"><img loading="lazy" src="https://totetmatt.github.io/media/posts/51/background.png" sizes="(max-width: 48em) 100vw, 768px" srcset="https://totetmatt.github.io/media/posts/51/responsive/background-xs.png 300w ,https://totetmatt.github.io/media/posts/51/responsive/background-sm.png 480w ,https://totetmatt.github.io/media/posts/51/responsive/background-md.png 768w ,https://totetmatt.github.io/media/posts/51/responsive/background-lg.png 1024w ,https://totetmatt.github.io/media/posts/51/responsive/background-xl.png 1360w ,https://totetmatt.github.io/media/posts/51/responsive/background-2xl.png 1600w" alt="Image description" width="4096" height="4096" /></figure> Background
<figure class="post__image"><img loading="lazy" src="https://totetmatt.github.io/media/posts/51/foreground.png" sizes="(max-width: 48em) 100vw, 768px" srcset="https://totetmatt.github.io/media/posts/51/responsive/foreground-xs.png 300w ,https://totetmatt.github.io/media/posts/51/responsive/foreground-sm.png 480w ,https://totetmatt.github.io/media/posts/51/responsive/foreground-md.png 768w ,https://totetmatt.github.io/media/posts/51/responsive/foreground-lg.png 1024w ,https://totetmatt.github.io/media/posts/51/responsive/foreground-xl.png 1360w ,https://totetmatt.github.io/media/posts/51/responsive/foreground-2xl.png 1600w" alt="Foreground" width="4096" height="4096" /></figure> Foreground</p>
<h2 id="processing-with-gmic">Processing with Gmic</h2>
<p>Assuming you exported the PNGs into the same directory and <code>gmic</code> is accessible from your terminal, run this command:</p>
<pre><code>gmic.exe background.png fx_stamp[-1] 1,100,0,30,0,1,1,0,50,50 fx_channel_processing[-1] 0,0,0,1.22,2,0,100,256,0,0,0,2,0,0,50,50 samj_Colored_Outlines[-1] 0,0,16,0,2,0,0,0,255 fx_channel_processing[-1] 0,0,100,0,0,0,100,256,0,1,0,2,0,0,50,50 output[-1] intermediate.png
</code></pre>
<p>Quick explanation:</p>
<ul>
<li><strong>fx_stamp</strong>: converts the image to black and white and inverts it.</li>
<li><strong>fx_channel_processing</strong>: applies a blur, which loosely simulates a kernel density estimation. To simplify, the blur tries to generate a density approximation at every point of the map.</li>
<li><strong>samj_Colored_Outlines</strong>: creates the isopleths. Roughly speaking, it’s a quantization of the discrete density approximation computed in the previous step.</li>
<li><strong>fx_channel_processing</strong>: converts the result to a black and white image.</li>
<li><strong>output</strong>: saves the image to <code>intermediate.png</code>.</li>
</ul>
<p><figure class="post__image"><img loading="lazy" src="https://totetmatt.github.io/media/posts/51/intermediate-3.png" sizes="(max-width: 48em) 100vw, 768px" srcset="https://totetmatt.github.io/media/posts/51/responsive/intermediate-3-xs.png 300w ,https://totetmatt.github.io/media/posts/51/responsive/intermediate-3-sm.png 480w ,https://totetmatt.github.io/media/posts/51/responsive/intermediate-3-md.png 768w ,https://totetmatt.github.io/media/posts/51/responsive/intermediate-3-lg.png 1024w ,https://totetmatt.github.io/media/posts/51/responsive/intermediate-3-xl.png 1360w ,https://totetmatt.github.io/media/posts/51/responsive/intermediate-3-2xl.png 1600w" alt="Image description" width="4096" height="4096" /></figure> Intermediate</p>
<p>Then run this second command:</p>
<pre><code>gmic.exe intermediate.png foreground.png +channels[-1] 100% +image[0] [1],0%,0%,0,0,1,[2],255 output[-1] output.png
</code></pre>
<p>Here the command simply composites <code>intermediate.png</code> and <code>foreground.png</code> into one final image.</p>
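<p>For convenience, both steps can be wrapped in a small shell script (a minimal sketch, assuming <code>gmic</code> is on your PATH; on Windows the binary is <code>gmic.exe</code>, and both PNGs are expected in the current directory):</p>
<pre><code class="language-bash">#!/bin/sh
set -e
# Step 1: build the isopleths from the nodes-only render
gmic background.png \
  fx_stamp[-1] 1,100,0,30,0,1,1,0,50,50 \
  fx_channel_processing[-1] 0,0,0,1.22,2,0,100,256,0,0,0,2,0,0,50,50 \
  samj_Colored_Outlines[-1] 0,0,16,0,2,0,0,0,255 \
  fx_channel_processing[-1] 0,0,100,0,0,0,100,256,0,1,0,2,0,0,50,50 \
  output[-1] intermediate.png
# Step 2: composite the foreground over the isopleths
gmic intermediate.png foreground.png \
  +channels[-1] 100% \
  +image[0] [1],0%,0%,0,0,1,[2],255 \
  output[-1] output.png
</code></pre>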
<p><figure class="post__image"><img loading="lazy" src="https://totetmatt.github.io/media/posts/51/output.png" sizes="(max-width: 48em) 100vw, 768px" srcset="https://totetmatt.github.io/media/posts/51/responsive/output-xs.png 300w ,https://totetmatt.github.io/media/posts/51/responsive/output-sm.png 480w ,https://totetmatt.github.io/media/posts/51/responsive/output-md.png 768w ,https://totetmatt.github.io/media/posts/51/responsive/output-lg.png 1024w ,https://totetmatt.github.io/media/posts/51/responsive/output-xl.png 1360w ,https://totetmatt.github.io/media/posts/51/responsive/output-2xl.png 1600w" alt="Image description" width="4096" height="4096" /></figure></p>
<h2 id="comments">Comments</h2>
<p>The process is still experimental; several things may vary, such as the export size, which requires re-parametrising the script.
There are also some limitations due to the current behaviour of Gephi: if the foreground is exported with node labels, it might produce an image that is not aligned with the background, which breaks the overall effect.</p>
<p>Apart from that, the effect works well on networks with a certain critical mass of node density.
Having the edges in the foreground hides the iso lines a little.</p>
<h2 id="some-other-experiments">Some other experiments</h2>
<p><figure class="post__image"><img loading="lazy" src="https://totetmatt.github.io/media/posts/51/world_border_final.png" sizes="(max-width: 48em) 100vw, 768px" srcset="https://totetmatt.github.io/media/posts/51/responsive/world_border_final-xs.png 300w ,https://totetmatt.github.io/media/posts/51/responsive/world_border_final-sm.png 480w ,https://totetmatt.github.io/media/posts/51/responsive/world_border_final-md.png 768w ,https://totetmatt.github.io/media/posts/51/responsive/world_border_final-lg.png 1024w ,https://totetmatt.github.io/media/posts/51/responsive/world_border_final-xl.png 1360w ,https://totetmatt.github.io/media/posts/51/responsive/world_border_final-2xl.png 1600w" alt="Image description" width="4096" height="4096" /></figure></p>
<p><figure class="post__image"><img loading="lazy" src="https://totetmatt.github.io/media/posts/51/miserable_final.png" sizes="(max-width: 48em) 100vw, 768px" srcset="https://totetmatt.github.io/media/posts/51/responsive/miserable_final-xs.png 300w ,https://totetmatt.github.io/media/posts/51/responsive/miserable_final-sm.png 480w ,https://totetmatt.github.io/media/posts/51/responsive/miserable_final-md.png 768w ,https://totetmatt.github.io/media/posts/51/responsive/miserable_final-lg.png 1024w ,https://totetmatt.github.io/media/posts/51/responsive/miserable_final-xl.png 1360w ,https://totetmatt.github.io/media/posts/51/responsive/miserable_final-2xl.png 1600w" alt="Image description" width="4096" height="4096" /></figure></p>
<p><figure class="post__image"><img loading="lazy" src="https://totetmatt.github.io/media/posts/51/rfc_final-2.png" sizes="(max-width: 48em) 100vw, 768px" srcset="https://totetmatt.github.io/media/posts/51/responsive/rfc_final-2-xs.png 300w ,https://totetmatt.github.io/media/posts/51/responsive/rfc_final-2-sm.png 480w ,https://totetmatt.github.io/media/posts/51/responsive/rfc_final-2-md.png 768w ,https://totetmatt.github.io/media/posts/51/responsive/rfc_final-2-lg.png 1024w ,https://totetmatt.github.io/media/posts/51/responsive/rfc_final-2-xl.png 1360w ,https://totetmatt.github.io/media/posts/51/responsive/rfc_final-2-2xl.png 1600w" alt="Image description" width="4096" height="4096" /></figure></p>
]]>
</content>
</entry>
<entry>
<title>Gephi's Twitter Streaming Importer V2 is Out!</title>
<author>
<name>Totetmatt</name>
</author>
<link href="https://totetmatt.github.io/gephis-twitter-streaming-importer-v2-is-out.html"/>
<id>https://totetmatt.github.io/gephis-twitter-streaming-importer-v2-is-out.html</id>
<category term="twitter"/>
<category term="graph"/>
<category term="Real-time"/>
<updated>2022-06-24T19:47:05+02:00</updated>
<summary>
<![CDATA[
<img src="https://totetmatt.github.io/media/posts/50/KodeLife-2022-05-29-at-20.57.32-0000_2.png" alt="" />
Important note: the old version of the plugin is deprecated; the latest version will be 1.4.4 and it won’t be updated anymore. The…
]]>
</summary>
<content type="html">
<![CDATA[
<img src="https://totetmatt.github.io/media/posts/50/KodeLife-2022-05-29-at-20.57.32-0000_2.png" alt="" />
<p><strong>Important note</strong>: <em>the old version of the plugin is deprecated; the latest version will be 1.4.4 and it won’t be updated anymore.</em></p>
<h1 id="why-a-v2-">Why a V2 ?</h1>
<p>The old version of the plugin is using the Twitter Streaming API v1 which is currently getting deprecated by twitter, with the consequence that most of the new users of the plugin are getting the famous “HTTP 403” error and can’t get the plugin working.</p>
<p>The Twitter Streaming Importer V2 is now using the new Twitter API v2. You still need to have a developer account and an application that can use the V2 version of the API (that should be the nominal case now). </p>
<h1 id="what-changes-">What changes ?</h1>
<h2 id="bearer-token">Bearer Token</h2>
<p>The old v1 and the new v2 APIs are slightly different, so you will need to reconfigure the credentials inside the plugin. Instead of the API key / access token set of credentials, you now only need the Bearer Token, which you can generate from your Twitter application account.</p>
<h2 id="query-rules">Query Rules</h2>
<p>The “query” is now fully handled by Twitter. The .json file used to save your queries won’t be backported, as the Twitter API now takes a fundamentally different approach to querying the stream. Please read how to create rules in the official Twitter documentation about filtered streams: <a href="https://developer.twitter.com/en/docs/twitter-api/tweets/filtered-stream/integrate/build-a-rule">https://developer.twitter.com/en/docs/twitter-api/tweets/filtered-stream/integrate/build-a-rule</a>.</p>
<p>This new way of building rules has multiple advantages:</p>
<ul>
<li>The rules are saved and bound to your application / Bearer Token, which means they persist if you close Gephi and re-open it.</li>
<li>You can add and remove rules without restarting the running stream.</li>
<li>You can have multiple rules, and each rule can be flagged with a ‘tag’. The plugin uses these rule tags to create new columns on the nodes so you can check which rules an entity matches.</li>
</ul>
<p>Again, as this mechanism is controlled purely by the Twitter API, please read the official docs for more detailed information.</p>
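<p>For illustration, adding and listing rules might look like this with <code>curl</code> (a sketch based on the public filtered-stream docs; the query and tag values here are made up, and <code>$BEARER_TOKEN</code> must hold a valid token):</p>
<pre><code class="language-bash"># Add one tagged rule to the filtered stream (hypothetical query/tag values)
curl -X POST "https://api.twitter.com/2/tweets/search/stream/rules" \
  -H "Authorization: Bearer $BEARER_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"add": [{"value": "gephi has:links", "tag": "gephi-links"}]}'

# List the rules currently bound to this Bearer Token
curl "https://api.twitter.com/2/tweets/search/stream/rules" \
  -H "Authorization: Bearer $BEARER_TOKEN"
</code></pre>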
<h2 id="other-details">Other details</h2>
<p>The API v2 also changed how the information is retrieved. Moreover, the plugin had to migrate from the Twitter4J library to the official twitter-api-java-sdk (<a href="https://github.com/twitterdev/twitter-api-java-sdk">https://github.com/twitterdev/twitter-api-java-sdk</a>).</p>
<p>These changes implied rewriting the network logic to support the new way data is gathered. Fortunately the rewrite was not that hard, and it was also a chance to review some of the logic and fix a few bugs. The network logic should behave mostly the same way as in the old version.</p>
<p>During the rewrite, some minor optimisations were made; notably, entity creation should no longer lag too far behind when using Force Atlas.</p>
]]>
</content>
</entry>
<entry>
<title>How to Capture your Bonzomatic with FFmpeg</title>
<author>
<name>Totetmatt</name>
</author>
<link href="https://totetmatt.github.io/how-to-capture-your-bonzomatic-with-ffmpeg.html"/>
<id>https://totetmatt.github.io/how-to-capture-your-bonzomatic-with-ffmpeg.html</id>
<category term="glsl"/>
<category term="ffmpeg"/>
<category term="bonzomatic"/>
<updated>2021-06-05T10:18:00+02:00</updated>
<summary>
<![CDATA[
<img src="https://totetmatt.github.io/media/posts/47/NlXGDH.jpg" alt="le Mandel Dube" />
Got to work on this website https://psenough.github.io/shader_summary/ that tries to gather all graphical live-coding events performed in the past. Basically, for people who don’t…
]]>
</summary>
<content type="html">
<![CDATA[
<img src="https://totetmatt.github.io/media/posts/47/NlXGDH.jpg" alt="le Mandel Dube" />
<p>Got to work on this website <a href="https://psenough.github.io/shader_summary/">https://psenough.github.io/shader_summary/</a> that tries to gather all graphical live-coding events performed in the past.</p>
<p>Basically, for people who don’t know, these are live-coding performances, sometimes run as a competition, sometimes as a jam, where folks create real-time graphics.</p>
<p>One of the common tools used is <a href="https://github.com/TheNuSan/Bonzomatic/releases/tag/v11">Bonzomatic</a>: a simple application that uses OpenGL to render a rectangle fitting the application window, and lets you live-edit the fragment shader that determines the color of each pixel.</p>
<p>The problem was that we had a lot of entries but no preview images, which is quite sad for a graphics discipline.</p>
<p>After spending an afternoon hacking on Bonzomatic to find a way to export the framebuffer to an image (I was almost there; I think I was only missing some color-format alignment), I thought of a simpler solution using the best tool ever: FFmpeg.</p>
<p>Looking at the FFmpeg wiki, there is a way (on Windows at least) to capture an application window (<a href="https://trac.ffmpeg.org/wiki/Capture/Desktop">https://trac.ffmpeg.org/wiki/Capture/Desktop</a>).</p>
<p>So, using the <code>gdigrab</code> input format and the window’s title, you can capture it like this:</p>
<pre><code class="language-bash">ffmpeg -f gdigrab -i 'title=BONZOMATIC - GLFW' -vframes 1 -q:v 2 -y snapshot.jpg
</code></pre>
<p>Some notes:</p>
<ul>
<li>It will also capture the mouse cursor if it’s inside the window, so be careful (there may be an option for this).</li>
<li>If you don’t use fullscreen, it captures only the “content” of the window, not the menu bar; this means that if you maximise, the output resolution will be the screen resolution minus the menu bar and other window chrome.</li>
<li>You might want to add an <code>-ss 1</code> before the input to let the application start and/or let ffmpeg warm up before recording.</li>
</ul>
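<p>The <code>gdigrab</code> device is Windows-only; on Linux, the same idea should work with the <code>x11grab</code> device (an untested sketch; <code>:0.0</code> is the usual display name, and the capture size must be given explicitly):</p>
<pre><code class="language-bash"># Grab one frame of the X11 display
ffmpeg -f x11grab -video_size 1920x1080 -i :0.0 -vframes 1 -q:v 2 -y snapshot.jpg
</code></pre>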
<p>Of course, you can also export video.
Here is an example of an <code>ffmpeg</code> command that records 10 seconds to mp4:</p>
<pre><code class="language-bash">ffmpeg -ss 1 -t 10 -y -framerate 60 -f gdigrab -vsync 0 -hwaccel cuda -hwaccel_output_format cuda -i 'title=BONZOMATIC - GLFW' -c:a copy -c:v h264_nvenc -tune hq -b:v 20M -bufsize 20M -maxrate 50M -qmin 0 -g 250 -bf 3 -b_ref_mode middle -temporal-aq 1 -rc-lookahead 20 -i_qfactor 0.75 -b_qfactor 1.1 out.mp4
# (I copy pasted some config + blind tweak. Can't really explain the options but was happy with result)
</code></pre>
<p>I’m using <code>h264_nvenc</code> as it’s much faster for encoding and avoids issues I had with the normal libx264.</p>
<p>I still need to look into capturing sound at the same time, and to see whether there are other input formats to play with.</p>
]]>
</content>
</entry>
<entry>
<title>Twitch and FFmpeg and Youtube-dl: Fetch from live stream to local file</title>
<author>
<name>Totetmatt</name>
</author>
<link href="https://totetmatt.github.io/twitch-and-ffmpeg-with-some-youtube-dl-help-fetch-from-live-stream-to-local-file.html"/>
<id>https://totetmatt.github.io/twitch-and-ffmpeg-with-some-youtube-dl-help-fetch-from-live-stream-to-local-file.html</id>
<updated>2020-10-13T20:04:57+02:00</updated>
<summary>
<![CDATA[
(Using Windows PowerShell; adapting for UNIX bash shouldn’t be a big issue.) Something nice about youtube-dl is that you can ask it not to download…
]]>
</summary>
<content type="html">
<![CDATA[
<p><em>(Using Windows PowerShell; adapting this for UNIX bash shouldn’t be a big issue.)</em></p>
<h1 id="record-a-live-stream">Record a live stream</h1>
<p>Something nice about youtube-dl is that you can ask it not to download the media but just to fetch the media link:</p>
<pre><code># General form: youtube-dl -g <url>
>> youtube-dl -g https://www.youtube.com/watch?v=RJt01u4yrLQ
https://r2---sn-h0jeened.googlevideo.com/videoplayback?expire=1[...]
</code></pre>
<p>If you use that on a Twitch channel that is streaming live, it returns the HLS stream.</p>
<pre><code>>> youtube-dl -g https://www.twitch.tv/farore_de_firone
https://video-weaver.ber01.hls.ttvnw.net/v1/playlist/CpkEQusnrcdffNI3[..]MA2c4.m3u8
</code></pre>
<p>By default it picks the best-quality video, but you can list and select among all available formats using <code>-F</code>:</p>
<pre><code>>> youtube-dl -F https://www.twitch.tv/farore_de_firone
[twitch:stream] farore_de_firone: Downloading stream GraphQL
[twitch:stream] farore_de_firone: Downloading access token JSON
[twitch:stream] 39653517372: Downloading m3u8 information
[info] Available formats for 39653517372:
format code extension resolution note
audio_only mp4 audio only 2k , mp4a.40.2
160p mp4 284x160 230k , avc1.4D401F, 30.0fps, mp4a.40.2
360p mp4 640x360 630k , avc1.4D401F, 30.0fps, mp4a.40.2
480p mp4 852x480 1262k , avc1.4D401F, 30.0fps, mp4a.40.2
720p60 mp4 1280x720 3257k , avc1.4D401F, 60.0fps, mp4a.40.2
1080p60__source_ mp4 1920x1080 6713k , avc1.64002A, 60.0fps, mp4a.40.2 (best)
</code></pre>
<p><em>You could even take only the audio stream.</em></p>
<p>And to select a format:</p>
<pre><code>>> youtube-dl -f 160p -g https://www.twitch.tv/farore_de_firone
</code></pre>
<p>With this link, you can use ffmpeg to record the stream locally on your computer (and have your own replay / VOD without the “inconveniences” of Twitch VODs :) )</p>
<pre><code>>> ffmpeg -i "$(youtube-dl -f 720p60 -g https://www.twitch.tv/farore_de_firone)" -c copy stream.20201012.mp4
</code></pre>
<p>This is a quite simple “dump” of the running stream. Nothing prevents you from adding filters or re-encoding to suit your needs.</p>
<h1 id="mixing-multiple-stream">Mixing multiple streams</h1>
<p>Let’s have some fun: some streamers play the same game together. Usually you can watch their POVs at the same time with the <strong>Twitch Squad</strong> mechanism or a <strong>Multitwitch</strong> application. But would it be possible to record a single file that way?</p>
<p>Actually yes: ffmpeg can take multiple video inputs and combine them on the fly via a filter complex to render all the videos in the same stream.</p>
<p>There is a nice topic on Stack Overflow that explains how to stack multiple videos: <a href="https://stackoverflow.com/questions/11552565/vertically-or-horizontally-stack-mosaic-several-videos-using-ffmpeg">https://stackoverflow.com/questions/11552565/vertically-or-horizontally-stack-mosaic-several-videos-using-ffmpeg</a></p>
<p>Example merging two videos:</p>
<pre><code>>> ffmpeg -i "$(youtube-dl -g https://www.twitch.tv/antoinedaniellive)" \
-i "$(youtube-dl -g https://www.twitch.tv/soon)" \
-filter_complex vstack=inputs=2 \
-map 0:a \
output.mp4
</code></pre>
<p>Example merging four videos:</p>
<pre><code>>> ffmpeg -i "$(youtube-dl -f 160p -g https://www.twitch.tv/antoinedaniellive)" \
-i "$(youtube-dl -f 160p -g https://www.twitch.tv/soon)" \
-i "$(youtube-dl -f 160p -g https://www.twitch.tv/angledroit )" \
-i "$(youtube-dl -f 160p -g https://www.twitch.tv/etoiles)" \
-filter_complex "[0:v][1:v][2:v][3:v]xstack=inputs=4:layout=0_0|w0_0|0_h0|w0_h0[v]" -map "[v]" \
-map 0:a \
-y output.mp4
</code></pre>
<p><em>Note</em>: it’s better to stack videos vertically. Horizontal stacking does work, but some services (like Twitter) won’t accept the video because the aspect ratio becomes too extreme.</p>
<p>The <code>-map 0:a</code> here is necessary to select which audio track you want to keep.</p>
<p>The mkv format also allows recording multiple video streams within one file, which you can select from afterwards:</p>
<pre><code>>> ffmpeg -i "$(youtube-dl -g https://www.twitch.tv/alphacast)" \
-i "$(youtube-dl -g https://www.twitch.tv/colas_bim)" \
-i "$(youtube-dl -g https://www.twitch.tv/eventisfr)" \
-i "$(youtube-dl -g https://www.twitch.tv/fusiow)" \
-map 0:1 -map 1:1 -map 2:1 -map 3:1 -map 0:0 \
-c copy \
out.mkv
</code></pre>
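<p>To pull a single POV back out of such a file, <code>-map</code> works the other way around too (a sketch; the stream indices depend on the mapping used when recording):</p>
<pre><code class="language-bash"># Extract the third video stream plus the first audio stream, without re-encoding
ffmpeg -i out.mkv -map 0:v:2 -map 0:a:0 -c copy pov3.mp4
</code></pre>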
]]>
</content>
</entry>
<entry>
<title>Extract Chapters from YouTube Media</title>
<author>
<name>Totetmatt</name>
</author>
<link href="https://totetmatt.github.io/extract-chapters-youtube-media.html"/>
<id>https://totetmatt.github.io/extract-chapters-youtube-media.html</id>
<category term="youtube"/>
<category term="ffmpeg"/>
<category term="bash"/>
<updated>2020-06-26T10:39:25+02:00</updated>
<summary>
<![CDATA[
YouTube recently got this “chapter” concept, where a long video is fragmented into chapters. I think this data might be parsed from the description of…
]]>
</summary>
<content type="html">
<![CDATA[
<p>YouTube recently got this “chapter” concept, where a long video is fragmented into chapters. I think this data is parsed from the video description, as they have already been parsing any available timestamps for a while now.</p>
<p>Thanks to youtube-dl, we can download the video and its metadata, which now contains this chapter data.</p>
<pre><code class="language-bash">$ youtube-dl --write-info-json -x --audio-format mp3 https://www.youtube.com/watch?v=HZTStHzWRxM
[youtube] HZTStHzWRxM: Downloading webpage
[info] Writing video description metadata as JSON to: The New Youtube Chapter Timestamp Feature-HZTStHzWRxM.info.json
[download] Destination: The New Youtube Chapter Timestamp Feature-HZTStHzWRxM.webm
[download] 100% of 3.22MiB in 00:00
[ffmpeg] Destination: The New Youtube Chapter Timestamp Feature-HZTStHzWRxM.mp3
Deleting original file The New Youtube Chapter Timestamp Feature-HZTStHzWRxM.webm (pass -k to keep)
</code></pre>
<p>We will use <a href="https://www.youtube.com/watch?v=HZTStHzWRxM">https://www.youtube.com/watch?v=HZTStHzWRxM</a> as an example.</p>
<p>The command above downloads the video file, transcodes it to mp3, and also saves the metadata in JSON format. We now have two files:</p>
<ul>
<li><code>The New Youtube Chapter Timestamp Feature-HZTStHzWRxM.info.json</code>, which contains the metadata</li>
<li><code>The New Youtube Chapter Timestamp Feature-HZTStHzWRxM.mp3</code>, which is the media</li>
</ul>
<p><code>jq</code> is a wonderful command-line tool to manipulate JSON in bash. For example, we can get the title of the video like this:</p>
<pre><code class="language-bash">$ cat The\ New\ Youtube\ Chapter\ Timestamp\ Feature-HZTStHzWRxM.info.json | jq -r .title | sed -e 's/[^A-Za-z0-9._-]/_/g'
The_New_Youtube_Chapter_Timestamp_Feature
</code></pre>
<p>The <code>sed</code> here makes sure we won’t have special characters that might lead to errors later.</p>
<p>The <code>-r</code> on <code>jq</code> tells it to return raw text. By default, <code>jq</code> uses syntax colorization and keeps some special characters that might lead to issues.</p>
<p>If available, the youtube-dl info JSON contains a <code>chapters</code> array holding all the chapters with their <code>start_time</code>, <code>end_time</code> and <code>title</code>.</p>
<pre><code class="language-bash">$ cat The\ New\ Youtube\ Chapter\ Timestamp\ Feature-HZTStHzWRxM.info.json |\
jq -r '.chapters[]'
{
"start_time": 0,
"end_time": 17,
"title": "The new feature"
}
{
"start_time": 17,
"end_time": 76,
"title": "Slow roll-out"
}
{
"start_time": 76,
"end_time": 124,
"title": "How it works"
}
{
"start_time": 124,
"end_time": 180,
"title": "Problems / suggestions for the future"
}
</code></pre>
<p>The idea now is to use each dict entry as parameters for <code>ffmpeg</code> to split the media according to the chapter data. As we are in bash, the current JSON representation is quite hard to use as-is, so we need to transform it a little so the output of <code>jq</code> can be used in a pipe and in <code>xargs</code>.</p>
<p>We also need to take into consideration that <code>ffmpeg</code> splits media given the option <code>-ss</code> for where to start and <code>-t</code> for the <strong>duration</strong> of the cut, <strong>not the end time</strong>. As the JSON gives us a start and an end time, we need to perform a simple subtraction to get the duration.</p>
<pre><code class="language-bash">$ cat The\ New\ Youtube\ Chapter\ Timestamp\ Feature-HZTStHzWRxM.info.json |\
jq -r '.chapters[] | .start_time,.end_time-.start_time,.title ' |\
sed 's/"//g'
0
17
The new feature
17
59
Slow roll-out
76
48
How it works
124
56
Problems / suggestions for the future
</code></pre>
<p>Thanks to <code>jq</code>, we can perform simple math operations directly in the query to compute the duration. <code>sed</code> here again only cleans up special characters.</p>
<p>Now we can pipe into the wonderful <code>xargs</code> to use the output as parameters and trigger an <code>ffmpeg</code> command:</p>
<pre><code class="language-bash">$ cat The\ New\ Youtube\ Chapter\ Timestamp\ Feature-HZTStHzWRxM.info.json|\
jq -r '.chapters[] | .start_time,.end_time-.start_time,.title ' |\
sed -e 's/[^A-Za-z0-9._-]/_/g' |\
xargs -n3 -t -d'\n' sh -c 'ffmpeg -y -ss $0 -i "The New Youtube Chapter Timestamp Feature-HZTStHzWRxM.mp3" -t $1 -codec:a copy "$2.mp3"'
</code></pre>
<ul>
<li><code>-n3</code> tells xargs to take the parameters three by three</li>
<li><code>-t</code> is only for debugging, as it prints each command <code>xargs</code> executes</li>
<li><code>-d'\n'</code> indicates that parameters are separated by <code>\n</code></li>
</ul>
<p>What is cool is that we could potentially parallelize the process by adding the <code>-P X</code> parameter to <code>xargs</code>, to run the multiple <code>ffmpeg</code> invocations in parallel, as sketched below.</p>
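<p>For example, the same pipeline with four parallel workers (adding <code>-P 4</code> and dropping the <code>-t</code> debug flag, whose output would interleave between workers):</p>
<pre><code class="language-bash">cat The\ New\ Youtube\ Chapter\ Timestamp\ Feature-HZTStHzWRxM.info.json |\
jq -r '.chapters[] | .start_time,.end_time-.start_time,.title ' |\
sed -e 's/[^A-Za-z0-9._-]/_/g' |\
xargs -P 4 -n3 -d'\n' sh -c 'ffmpeg -y -ss $0 -i "The New Youtube Chapter Timestamp Feature-HZTStHzWRxM.mp3" -t $1 -codec:a copy "$2.mp3"'
</code></pre>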
<p>On the <code>ffmpeg</code> side, nothing tremendous:</p>
<ul>
<li><code>-ss</code> and <code>-t</code> have already been explained: start time and duration</li>
<li><code>-codec:a copy</code> indicates that we keep the original file’s codec, so there is no re-encoding for the output file, which means it’s fast</li>
<li><code>-y</code> avoids the prompt and forces overwriting of an existing output file</li>
</ul>
<p>That works quite well. It might be possible to fully one-line it, but let’s write a proper script to ease its usage.</p>
<pre><code class="language-bash">#!/bin/sh
set -x
#Download media + metadata
youtube-dl --write-info-json -x --audio-format mp3 -o "tmp_out.%(ext)s" "$1"
# Maybe a way to get the file name from previous function
INFO="tmp_out.info.json"
AUDIO="tmp_out.mp3"
echo :: $INFO $AUDIO ::
# Fetch the title
TITLE=$(cat "$INFO" | jq -r .title | sed -e 's/[^A-Za-z0-9._-]/_/g' )
# ^--- Remove all weird character as we want to use it as filename
# We will put all chapter into a directory
mkdir "$TITLE"
# Chapterization
cat "$INFO" |\
jq -r '.chapters[] | .start_time,.end_time-.start_time,.title ' |\
sed -e 's/[^A-Za-z0-9._-]/_/g' |\
xargs -n3 -t -d'\n' sh -c "ffmpeg -y -ss \$0 -i \"$AUDIO\" -t \$1 -codec:a copy -f mp3 \"$TITLE/\$2.mp3\""
#Remove tmp file
rm tmp_out*
</code></pre>
<p>The script file here : <a href="https://gist.github.com/totetmatt/b4bf50c62642e5a9e1bf6365a47e19c6">https://gist.github.com/totetmatt/b4bf50c62642e5a9e1bf6365a47e19c6</a></p>
<p>No big change to the overall approach, but something to be careful about: yes, there is a hellish quote-escaping game to play, and it might not be pleasant…</p>
<p>To explain the last part, as far as I understand it, the string is evaluated multiple times:</p>
<ul>
<li>The first time is at “script level”: any <code>$VARIABLE</code> present in the script, like <code>$AUDIO</code> and <code>$TITLE</code>, is replaced.</li>
<li>The second time is at the <code>xargs / sh -c</code> invocation, where it becomes possible to use <code>$0, $1 and $2</code>. If we didn’t escape them first, these variables would be evaluated in the first round; that’s why we need to backslash them: <code>\$0, \$1, \$2</code>.</li>
</ul>
<p>You can see the string after the first evaluation thanks to the <code>-t</code> option of <code>xargs</code>:</p>
<pre><code class="language-bash">sh -c 'ffmpeg -y -ss $0 -i "The New Youtube Chapter Timestamp Feature-HZTStHzWRxM.mp3" -to $1 -codec:a copy -f mp3 "The_New_Youtube_Chapter_Timestamp_Feature/$2.mp3"' 124 56 Problems___suggestions_for_the_future
</code></pre>
<p>There might be other, better ways to deal with the argument parsing, string escaping and cleanup, but the current solution works well enough :)</p>
]]>
</content>
</entry>
<entry>
<title>Bash Sort</title>
<author>
<name>Totetmatt</name>
</author>
<link href="https://totetmatt.github.io/bash-sort-2.html"/>
<id>https://totetmatt.github.io/bash-sort-2.html</id>
<category term="bash"/>
<updated>2020-06-20T11:05:49+02:00</updated>
<summary>
<![CDATA[
From time to time we have code challenges / code katas at work. The concept is simple: just small problems to solve so…
]]>
</summary>
<content type="html">
<![CDATA[
<p>From time to time we have code challenges / code katas at work. The concept is simple: just small problems to solve so that we can share solutions. I usually do one version in a language like <strong>Scala</strong>, <strong>Python</strong> or another more or less fancy language like <strong>C++</strong> or even <strong>Haskell</strong>, but I also try to have a <strong>Bash</strong> version, with extra points if I can one-line it.</p>
<p>This one was about sorting Star Wars movies into story-chronological order, starting from a list in release order.</p>
<pre><code class="language-bash">## Generation of files ##
cat > movies.txt <<EOL
A New Hope (1977)
The Empire Strikes Back (1980)
Return of the Jedi (1983)
The Phantom Menace (1999)
Attack of the Clones (2002)
Revenge of the Sith (2005)
The Force Awakens (2015)
Rogue One: A Star Wars Story (2016)
The Last Jedi (2017)
Solo: A Star Wars Story (2018)
EOL
cat > order.txt <<EOL
4
5
6
10
8
1
2
3
7
9
EOL
cat > expect.txt <<EOL
The Phantom Menace (1999)
Attack of the Clones (2002)
Revenge of the Sith (2005)
Solo: A Star Wars Story (2018)
Rogue One: A Star Wars Story (2016)
A New Hope (1977)
The Empire Strikes Back (1980)
Return of the Jedi (1983)
The Force Awakens (2015)
The Last Jedi (2017)
EOL
</code></pre>
<p>Each line of <code>order.txt</code> tells which line of <code>movies.txt</code> goes at that position. We can’t just do <code>paste order.txt movies.txt | sort -n</code> (that would apply the inverse permutation), so we need a way to extract the <strong>nth</strong> line of <code>movies.txt</code>.</p>
<p>To do that, we can use <code>sed -n "Np" file</code>: <code>N</code> is the line number to get, <code>p</code> is the sed “print” command, and <code>-n</code> is needed because by default sed prints every line anyway.</p>
<pre><code class="language-bash">totetmatt$ sed -n 10p movies.txt
Solo: A Star Wars Story (2018)
</code></pre>
<p>We can then wire this to the contents of <code>order.txt</code> via a pipeline and <code>xargs</code>. Let’s also add a <code>tee</code> at the end so the result is stored in a file while keeping the output available for other operations.</p>
<pre><code class="language-bash">cat order.txt | xargs -I % sed -n "%p" movies.txt | tee result.txt
The Phantom Menace (1999)
Attack of the Clones (2002)
Revenge of the Sith (2005)
Solo: A Star Wars Story (2018)
Rogue One: A Star Wars Story (2016)
A New Hope (1977)
The Empire Strikes Back (1980)
Return of the Jedi (1983)
The Force Awakens (2015)
The Last Jedi (2017)
</code></pre>
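<p>As an aside, the same “gather” can be done in a single pass with <code>awk</code> (a sketch; it first loads <code>movies.txt</code> into an array, then prints the requested line for each entry of <code>order.txt</code>):</p>
<pre><code class="language-bash">awk 'NR==FNR {movie[FNR]=$0; next} {print movie[$1]}' movies.txt order.txt
</code></pre>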
<p>We could add an automated check to be sure the operation works as expected. A good solution would be <code>diff result.txt expect.txt</code>, as we have the two files. But let’s say we don’t have <code>result.txt</code>, only the command output, and we still want to use <code>diff</code>, which only accepts files as input.</p>
<p>We can then use <code><(command)</code> to treat the whole command output as an input file for <code>diff</code>.</p>
<pre><code class="language-bash">diff <(cat order.txt | xargs -I % sed -n "%p" movies.txt | tee result.txt) expect.txt || echo "No ok"
</code></pre>
<p><a href="https://gist.github.com/totetmatt/2b4c74eb214fcc6d04ffcd39bdbd43ad">https://gist.github.com/totetmatt/2b4c74eb214fcc6d04ffcd39bdbd43ad</a></p>
]]>
</content>
</entry>
<entry>
<title>FFMpeg</title>
<author>
<name>Totetmatt</name>
</author>
<link href="https://totetmatt.github.io/ffmpeg.html"/>
<id>https://totetmatt.github.io/ffmpeg.html</id>
<category term="video"/>
<category term="ffmpeg"/>
<updated>2020-06-20T01:38:40+02:00</updated>
<summary>
<![CDATA[
Some findings and command lines I use regularly with ffmpeg. http://www.astro-electronic.de/FFmpeg_Book.pdf https://engineering.giphy.com/how-to-make-gifs-with-ffmpeg/ ffmpeg -i input.mp4 -filter_complex "[0:v] fps=12,scale=w=480:h=-1,split [a][b];[a] palettegen=stats_mode=single [p];[b][p] paletteuse=new=1" output.gif https://superuser.com/questions/777938/ffmpeg-convert-a-video-to-a-timelapse ffmpeg…
]]>
</summary>
<content type="html">
<![CDATA[
<p>Some findings and command lines I use regularly with ffmpeg.</p>
<h2>General User Doc</h2>
<p><a href="http://www.astro-electronic.de/FFmpeg_Book.pdf">http://www.astro-electronic.de/FFmpeg_Book.pdf</a></p>
<h2>Create gif</h2>
<p><a href="https://engineering.giphy.com/how-to-make-gifs-with-ffmpeg/">https://engineering.giphy.com/how-to-make-gifs-with-ffmpeg/</a></p>
<p><code>ffmpeg -i input.mp4 -filter_complex "[0:v] fps=12,scale=w=480:h=-1,split [a][b];[a] palettegen=stats_mode=single [p];[b][p] paletteuse=new=1" output.gif</code></p>
<h2>Timelapse</h2>
<p><a href="https://superuser.com/questions/777938/ffmpeg-convert-a-video-to-a-timelapse">https://superuser.com/questions/777938/ffmpeg-convert-a-video-to-a-timelapse</a></p>
<p><code>ffmpeg -i input.mp4 -filter:v "setpts=0.5*PTS" -an output.mp4</code></p>
<p><a href="http://mahugh.com/2015/04/29/creating-time-lapse-videos/">http://mahugh.com/2015/04/29/creating-time-lapse-videos/</a></p>
<p><a href="http://social.d-e.gr/techblog/posts/12-smoother-timelapses-ffmpeg">http://social.d-e.gr/techblog/posts/12-smoother-timelapses-ffmpeg</a></p>
<p><code>ffmpeg -i input -vf "tblend=average,framestep=2,tblend=average,framestep=2,tblend=average,framestep=2,tblend=average,framestep=2,setpts=0.25*PTS" -r 96 -b:v 30M -crf 10 -an output</code></p>
]]>
</content>
</entry>
<entry>
<title>Keras and Gephi : Visualize your Deep Learning Graph</title>
<author>
<name>Totetmatt</name>
</author>
<link href="https://totetmatt.github.io/keras-and-gephi-visualize-your-deep-learning-graph.html"/>
<id>https://totetmatt.github.io/keras-and-gephi-visualize-your-deep-learning-graph.html</id>
<updated>2020-06-20T00:35:15+02:00</updated>
<summary>
<![CDATA[
If you work on Machine Learning / Deep Learning with Keras, you can export the model as a dot file. And guess what? Gephi…
]]>
</summary>
<content type="html">
<![CDATA[
<p>If you work on Machine Learning / Deep Learning with Keras, you can export the model as a dot file. And guess what? Gephi can read dot files! :D</p>
<p>To do that, use this code (adapt it to your use case):</p>
<p><a href="https://gist.github.com/totetmatt/dcc85d27b0fdfd79513cbe43201f507f">https://gist.github.com/totetmatt/dcc85d27b0fdfd79513cbe43201f507f</a></p>
<pre>from keras.applications import *
from keras.utils import plot_model
# [..]
# model = ...
# Get your own model here
# [..]
model = NASNetMobile()  # Example with NASNetMobile
plot_model(model, show_shapes=False, to_file='model.dot')</pre>
<p>It will then generate a <em>model.dot</em> file that you can open directly in Gephi!</p>
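<p>Note that <code>plot_model</code> relies on the pydot package and the Graphviz binaries being installed (an assumption about your environment; package names may differ per distribution):</p>
<pre><code class="language-bash">pip install pydot
# Graphviz itself, e.g. on Debian/Ubuntu:
sudo apt-get install graphviz
</code></pre>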
<figure class="alignnone size-medium wp-image-569"><img loading="lazy" src="https://totetmatt.github.io/media/posts/40/screenshot_111609-300x225.png" sizes="(max-width: 48em) 100vw, 768px" srcset="https://totetmatt.github.io/media/posts/40/responsive/screenshot_111609-300x225-xs.png 300w ,https://totetmatt.github.io/media/posts/40/responsive/screenshot_111609-300x225-sm.png 480w ,https://totetmatt.github.io/media/posts/40/responsive/screenshot_111609-300x225-md.png 768w ,https://totetmatt.github.io/media/posts/40/responsive/screenshot_111609-300x225-lg.png 1024w ,https://totetmatt.github.io/media/posts/40/responsive/screenshot_111609-300x225-xl.png 1360w ,https://totetmatt.github.io/media/posts/40/responsive/screenshot_111609-300x225-2xl.png 1600w" alt="" width="300" height="225" /></figure>
]]>
</content>
</entry>
<entry>
<title>Twitter Streaming Importer: Naoyun as a Gephi Plugin</title>
<author>
<name>Totetmatt</name>
</author>
<link href="https://totetmatt.github.io/twitter-streaming-importer-naoyun-as-a-gephi-plugin.html"/>
<id>https://totetmatt.github.io/twitter-streaming-importer-naoyun-as-a-gephi-plugin.html</id>
<category term="Gephi"/>
<updated>2020-06-20T00:35:15+02:00</updated>
<summary>
<![CDATA[
Hello everybody! Great news today, almost a five-year achievement: Twitter Streaming Importer is out. It uses the Twitter Stream API to get…
]]>
</summary>
<content type="html">
<![CDATA[
<p>Hello everybody!</p>
<p>Great news today, almost a five-year achievement: Twitter Streaming Importer is out.</p>
<p>It uses the Twitter Stream API to fetch current tweets and display them as a graph in real time in Gephi.</p>
<p>It's basically a simple version of Naoyun embedded in Gephi, which I hope will be easier for everybody to use.</p>
<p>It embeds the three main network logics, with small updates:</p>
<ul>
<li><strong>User Network</strong>: still a user-to-user network, but with Gephi 0.9 we can now have parallel edges, which means this network logic now differentiates a "Retweet" from a "Mention". Moreover, each reference updates the weight of the edges.</li>
<li><strong>Smart Full Network</strong>: creates a full graph of a tweet's activity.</li>
<li><strong>Hashtag Network</strong>: builds a graph based only on hashtags.</li>
</ul>
<p>Just download it from Gephi, under <strong>Tools > Plugins</strong>, and follow the steps.</p>
<p>To use the plugin, you will need a Twitter account and to create a dummy application here: <a href="https://apps.twitter.com/">https://apps.twitter.com/</a>.</p>
<h2>What's in the pipe for the next version of the plugin</h2>
<ul>
<li><strong>Enhanced data</strong>: for the moment only the "label" is used; in the future it should be possible to have all the metadata from a tweet, a user, etc.</li>
<li><strong>Twitter API key</strong>: a persistent problem; Twitter's model for accessing their API isn't designed for open-source desktop projects. I need to check with the Gephi guys how the key registration could be made easier for the user.</li>
<li><strong>Custom network logic</strong>: it's technically possible today to have your own network logic used in the plugin; the process for doing so just needs to be reviewed.</li>
<li>Access to the <strong>sample stream API</strong></li>
<li>Adding the possibility to track by <strong>location</strong></li>
</ul>
<h2>It's not the end for Naoyun</h2>
<p>Naoyun won't die and will keep some specific features that could not be transferred to the Gephi plugin. For the moment, though, development has slowed down due to dependency issues and permanent refactoring.</p>
]]>
</content>
</entry>
</feed>