
Media (1)
-
GetID3 - File information block
9 April 2013
Updated: May 2013
Language: French
Type: Image
Other articles (79)
-
Enhancing its visual appearance
10 April 2011. MediaSPIP is based on a system of themes and skeletons. The skeletons define where information is placed on the page, shaping a specific use of the platform, and the themes provide the overall graphic design.
Anyone can propose a new graphic theme or skeleton and make it available to the community.
-
The farm's regular cron tasks
1 December 2010. Managing the farm involves running several repetitive tasks, known as cron tasks, at regular intervals.
The super cron (gestion_mutu_super_cron)
This task, scheduled every minute, simply calls the cron of every instance in the shared-hosting farm on a regular basis. Coupled with a system cron on the farm's central site (see the sketch below), this generates regular visits to the various sites and prevents the tasks of rarely visited sites from being too (...)
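A minimal sketch of such a system cron entry on the central server, assuming a hypothetical farm URL and that a plain HTTP request is enough to trigger the super cron:

# Hypothetical crontab entry on the central server: request the central site
# every minute so that gestion_mutu_super_cron runs and relays to all instances
* * * * * wget -q -O /dev/null http://ferme.example.org/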
-
Publishing on MediaSPIP
13 June 2013. Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.
On other sites (7718)
-
Extract image of every frame of a video using react-native-ffmpeg
1 October 2020, by EdG. I have looked all over the internet for a way to extract an image of every frame of a video using react-native-ffmpeg. I am building a mobile app and want to show the per-frame images on the video timeline. I want to do this natively on mobile so that I can use the device's hardware, which is why I am looking at a react-native-ffmpeg kind of library. Am I heading in the right direction? This npmjs.com/package/react-native-ffmpeg is what I am trying to use. I need to know the command to do the job.
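A minimal sketch (the file names are hypothetical): the plain ffmpeg commands below write frame images; with react-native-ffmpeg the same argument string, without the leading "ffmpeg", would typically be handed to the library's execute call.

# Dump every decoded frame as a numbered JPEG (-vsync 0 keeps frames one-to-one)
ffmpeg -i input.mp4 -vsync 0 -qscale:v 2 frames/frame_%05d.jpg
# Lighter variant for a timeline strip: one thumbnail per second
ffmpeg -i input.mp4 -vf fps=1 frames/thumb_%05d.jpg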


-
Concatenate mp4 video files on iOS using Libavformat
3 March 2017, by Felix. I need to merge (i.e. concatenate) multiple video files (using the H.264 codec) in an iOS application. The problem is that I can't use AVFoundation because it can't handle the video format on iOS (no problem on macOS). So I tried to use the FFmpeg library, specifically Libavformat, for this task. I found C code that looks useful here: http://stackoverflow.com/a/16293046. But I can't get it to work and don't fully understand it. The output video file appears to play only black frames. I'm using the precompiled libraries from the FFmpeg CocoaPod.
I was able to convert a single test video (which is 1 GB) using the command-line tool on my Mac:
$ ffmpeg -i input.mp4 -vcodec copy output.mp4
ffmpeg version 3.2.4-tessus Copyright (c) 2000-2017 the FFmpeg developers
built with Apple LLVM version 8.0.0 (clang-800.0.42.1)
configuration: --cc=/usr/bin/clang --prefix=/opt/ffmpeg --extra-version=tessus --enable-avisynth --enable-fontconfig --enable-gpl --enable-libass --enable-libbluray --enable-libfreetype --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopus --enable-libschroedinger --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzmq --enable-version3 --disable-ffplay --disable-indev=qtkit --disable-indev=x11grab_xcb
libavutil 55. 34.101 / 55. 34.101
libavcodec 57. 64.101 / 57. 64.101
libavformat 57. 56.101 / 57. 56.101
libavdevice 57. 1.100 / 57. 1.100
libavfilter 6. 65.100 / 6. 65.100
libswscale 4. 2.100 / 4. 2.100
libswresample 2. 3.100 / 2. 3.100
libpostproc 54. 1.100 / 54. 1.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'input.mp4':
  Metadata:
    major_brand : mp42
    minor_version : 0
    compatible_brands: mp42mp41isomiso2
    creation_time : 2016-09-14T14:35:53.000000Z
  Duration: 00:04:32.29, start: 0.000000, bitrate: 30013 kb/s
    Stream #0:0(eng): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], 29958 kb/s, 50 fps, 50 tbr, 5k tbn, 100 tbc (default)
    Metadata:
      creation_time : 2016-09-14T14:35:53.000000Z
      handler_name : VideoHandler
    Stream #0:1(eng): Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 128 kb/s (default)
    Metadata:
      creation_time : 2016-09-14T14:35:53.000000Z
      handler_name : SoundHandler
Output #0, mp4, to 'output.mp4':
  Metadata:
    major_brand : mp42
    minor_version : 0
    compatible_brands: mp42mp41isomiso2
    encoder : Lavf57.56.101
    Stream #0:0(eng): Video: h264 (Main) ([33][0][0][0] / 0x0021), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], q=2-31, 29958 kb/s, 50 fps, 50 tbr, 10k tbn, 5k tbc (default)
    Metadata:
      creation_time : 2016-09-14T14:35:53.000000Z
      handler_name : VideoHandler
    Stream #0:1(eng): Audio: aac (LC) ([64][0][0][0] / 0x0040), 48000 Hz, stereo, fltp, 128 kb/s (default)
    Metadata:
      creation_time : 2016-09-14T14:35:53.000000Z
      handler_name : SoundHandler
      encoder : Lavc57.64.101 aac
Stream mapping:
  Stream #0:0 -> #0:0 (copy)
  Stream #0:1 -> #0:1 (aac (native) -> aac (native))
Press [q] to stop, [?] for help
frame=13577 fps=1280 q=-1.0 Lsize= 997604kB time=00:04:32.28 bitrate=30013.7kbits/s speed=25.7x
video:993034kB audio:4268kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.030246%
[aac @ 0x7fa849809e00] Qavg: 942.999
This process is very fast (about 10 seconds) and I can play the resulting video on iOS using the AVFoundation frameworks. If I could do the same for each input video using Libavformat in my iOS app, I could solve my problem by merging the videos in a second step using AVAssetExportSession. Any help is appreciated.
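For reference, the concat demuxer performs this kind of merge on the command line without re-encoding; a minimal sketch with hypothetical file names, assuming every part shares the same codec parameters (which is also what a Libavformat-based remux has to preserve):

# Hypothetical file names; all parts must share codecs/parameters for stream copy to work
$ cat list.txt
file 'part1.mp4'
file 'part2.mp4'
$ ffmpeg -f concat -safe 0 -i list.txt -c copy merged.mp4

Like the -vcodec copy run above, -c copy only rewrites the container, so the merge should stay fast.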
-
Video files are not opening for sample applications in InfoSphere Streams
10 March 2017, by Pavan Kumar. I am new to IBM InfoSphere Streams. I read an article which says IBM InfoSphere Streams is capable of image processing. After some research I learned that we have to install the OpenCV and FFmpeg libraries with their dependencies. I have installed all of them and tried the sample applications. I can run applications that take images as input, but when it comes to processing videos it does not work. I am unable to use the X11Viewer operator either. I get the following error while working with the sample videos.
(Streams com.ibm.streamsx.opencv::X11Viewer operator:7889): GLib-GObject-CRITICAL **: g_object_unref: assertion `G_IS_OBJECT (object)' failed
28 Feb 2017 14:00:34.672 [7889] ERROR #splapptrc,J[0],P[0],vid0,spl_pe M[PEImpl.cpp:process:1270] - CDISR5079E: An exception occurred during the processing of the processing element. The error is: Unable to open camera {0}.
I did not install any GPU device drivers here, but when I ran the following commands I got the results below:
[streamsadmin@streamsqse output]$ lspci | grep VGA
00:0f.0 VGA compatible controller: VMware SVGA II Adapter
[streamsadmin@streamsqse output]$ find /dev -group video
/dev/fb0
/dev/dri/card0
/dev/dri/renderD128
/dev/dri/controlD64
/dev/agpgart
and
glxinfo | grep -i vendor
server glx vendor string: SGI
client glx vendor string: Mesa Project and SGI
OpenGL vendor string: VMware, Inc.
My question is whether I have installed the GPU drivers properly or whether I need to install them again. Can anyone help me resolve this issue?
I am also unable to open those videos with any player.
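One quick check, a minimal sketch with a hypothetical file name, is to see whether FFmpeg alone can read the clips, independently of Streams and OpenCV; if these commands fail, the problem lies with the file or the FFmpeg build rather than with the Streams operators:

# Print container/stream details; errors here mean FFmpeg cannot even parse the file
ffprobe -v error -show_format -show_streams sample_video.avi
# Decode the whole clip and discard the output; errors here point at a missing decoder
ffmpeg -v error -i sample_video.avi -f null -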