
Media (3)
-
MediaSPIP Simple: future default graphic theme?
26 September 2013
Updated: October 2013
Language: French
Type: Video
-
GetID3 - File information block
9 April 2013
Updated: May 2013
Language: French
Type: Image
-
GetID3 - Additional buttons
9 April 2013
Updated: April 2013
Language: French
Type: Image
Other articles (100)
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.
-
Farm deployment option
12 April 2011
MediaSPIP can be installed as a farm, with a single "core" hosted on a dedicated server and used by a multitude of different sites.
This makes it possible, for example: to share the setup costs between several projects / individuals; to deploy a multitude of unique sites quickly; to avoid having to put all creations into a digital catch-all, as is the case with the large general-public platforms scattered across the (...)
-
Adding user-specific information and other author-related behaviour changes
12 April 2011
The simplest way to add information to authors is to install the Inscription3 plugin. It also makes it possible to modify certain user-related behaviours (refer to its documentation for more information).
It is also possible to add fields to authors by installing the plugins "champs extras 2" and "Interface pour champs extras".
On other sites (10376)
-
avcodec/nvenc: De-compensate aspect ratio compensation of DVD-like content.
28 January 2015, by Philip Langdale
For reasons we are not privy to, nvidia decided that the nvenc encoder
should apply aspect ratio compensation to 'DVD like' content, assuming that
the content is not bt.601 compliant, but needs to be bt.601 compliant. In
this context, that means that they make the following, questionable,
assumptions:

1) If the input dimensions are 720x480 or 720x576, assume the content has
an active area of 704x480 or 704x576.

2) Assume that whatever the input sample aspect ratio is, it does not account
for the difference between 'physical' and 'active' dimensions.

From these assumptions, they then conclude that they can 'help' by adjusting
the sample aspect ratio by a factor of 45/44. And indeed, if you wanted to
display only the 704 wide active area with the same aspect ratio as the full
720 wide image, this would be the correct adjustment factor. But what if you
don't? And more importantly, what if you're used to ffmpeg not making this kind
of adjustment at encode time, because none of the other encoders do this?

And what if you had already accounted for bt.601 and your input had the
correct attributes? Well, it's going to apply the compensation anyway!
So, if you take some content and feed it through nvenc repeatedly, it
will keep scaling the aspect ratio every time, stretching your video out
more and more and more.

So, clearly, regardless of whether you want to apply bt.601 aspect ratio
adjustments or not, this is not the way to do it. With any other ffmpeg
encoder, you would do it as part of defining your input parameters or
do the adjustment at playback time, and there's no reason why nvenc
should be any different.

This change adds some logic to undo the compensation that nvenc would
otherwise do.

nvidia engineers have told us that they will work to make this
compensation mechanism optional in a future release of the nvenc
SDK. At that point, we can adapt accordingly.

Signed-off-by: Philip Langdale <philipl@overt.org>
Reviewed-by: Timo Rothenpieler <timo@rothenpieler.org>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
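To make the compounding effect described in the commit message concrete, here is a small Python sketch of the arithmetic only; it is not the FFmpeg patch itself, and the 44/45 undo factor is simply inferred from the 45/44 compensation factor named above:

from fractions import Fraction

COMPENSATION = Fraction(45, 44)    # factor nvenc applies to 'DVD-like' input
DECOMPENSATION = Fraction(44, 45)  # factor that cancels it out

def nvenc_sar(sar, width, height, undo=False):
    """Model the behaviour described in the commit message for DVD-like sizes."""
    if (width, height) in ((720, 480), (720, 576)):
        if undo:
            sar *= DECOMPENSATION  # pre-adjust so the built-in compensation cancels
        sar *= COMPENSATION        # what the encoder does on its own
    return sar

sar = Fraction(8, 9)  # an example NTSC 4:3 sample aspect ratio
for i in range(3):
    sar = nvenc_sar(sar, 720, 480)
    print(f"pass {i + 1}: SAR = {sar}")  # drifts further from 8/9 on every pass

print(nvenc_sar(Fraction(8, 9), 720, 480, undo=True))  # stays at 8/9

Run without the undo step, the ratio drifts from 8/9 to 10/11, then 225/242, and so on, which is exactly the repeated stretching the commit message complains about; with the undo step, it stays put.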
-
VB.Net Stream/Port Image To FFmpeg using Standard Input
31 January 2015, by Zakir_SZH
Hope all active members of this community are doing well.
In my project I need to convert several images into a video file.
Currently I have to save the images to the PC first, then use ffmpeg to read those images from disk and convert them into a movie. But rather than saving the images to the PC, I want to send them directly to ffmpeg through its StandardInput. FFmpeg does support that, but I don't know how to use it.
I have searched the internet a lot, and all I found is how to read output/responses from ffmpeg via StandardError, since ffmpeg writes to standard error instead of standard output.
The link below is something similar to what I am looking for, but that code doesn't work and it also uses an mp3 file instead of an image/graphics object.
http://tiku.io/questions/3075356/convert-wma-stream-to-mp3-stream-with-c-sharp-and-ffmpeg
Can anyone help me out with how to send images from a PictureBox/Graphics object to ffmpeg one by one?
Thanks in advance.
Best regards
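The question is about VB.Net and Process.StandardInput, but the underlying pattern is language-agnostic: launch ffmpeg with the image2pipe demuxer reading from "-", then write each encoded image to the process's standard input. Below is a minimal sketch of that pattern in Python; the file names, frame rate and codec are illustrative assumptions, and in VB.Net the same idea applies by writing the image bytes to the process's StandardInput stream.

import subprocess

# Launch ffmpeg so that it reads a stream of PNG images from standard input
# (the image2pipe demuxer) and encodes them to an H.264 MP4. Paths, frame
# rate and codec here are illustrative choices, not taken from the question.
cmd = [
    "ffmpeg", "-y",
    "-f", "image2pipe",   # demuxer that accepts a sequence of images on a pipe
    "-framerate", "25",   # assumed output frame rate
    "-i", "-",            # "-" means: read the image stream from stdin
    "-c:v", "libx264",
    "-pix_fmt", "yuv420p",
    "output.mp4",
]

proc = subprocess.Popen(cmd, stdin=subprocess.PIPE)

# Feed already-encoded PNG bytes one frame at a time. In a GUI application
# these bytes would come from the in-memory image object (the PictureBox /
# Graphics equivalent) rather than from files on disk.
for name in ["frame001.png", "frame002.png", "frame003.png"]:
    with open(name, "rb") as f:
        proc.stdin.write(f.read())

proc.stdin.close()  # closing stdin tells ffmpeg the image stream has ended
proc.wait()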
-
Use ffmpeg to stream live content to azure media services
9 June 2016, by Dadicool
I've been trying to stream content to Azure Media Services using ffmpeg, as it's one of the options described here: http://azure.microsoft.com/blog/2014/09/18/azure-media-services-rtmp-support-and-live-encoders/
My command is:
ffmpeg -v verbose -i 300.mp4 -strict -2 -c:a aac -b:a 128k -ar 44100 -r 30 -g 60 -keyint_min 60 -b:v 400000 -c:v libx264 -preset medium -bufsize 400k -maxrate 400k -f flv rtmp://nessma-****.channel.mediaservices.windows.net:1935/live/584c99f5c47f424d9e83ac95364331e7
I have made sure that the streaming endpoint has one active streaming unit; I also made sure that the channel is actually Ready, and I even got it to start streaming (which makes a PublishURL available).
When I execute the ffmpeg command to start streaming, I keep getting the following error:
ffmpeg version 2.5.2 Copyright (c) 2000-2014 the FFmpeg developers
built on Dec 30 2014 11:31:18 with llvm-gcc 4.2.1 (LLVM build 2336.11.00)
configuration: --prefix=/Volumes/Ramdisk/sw --enable-gpl --enable-pthreads --enable-version3 --enable-libspeex --enable-libvpx --disable-decoder=libvpx --enable-libmp3lame --enable-libtheora --enable-libvorbis --enable-libx264 --enable-avfilter --enable-libopencore_amrwb --enable-libopencore_amrnb --enable-filters --enable-libgsm --enable-libvidstab --enable-libx265 --arch=x86_64 --enable-runtime-cpudetect
libavutil 54. 15.100 / 54. 15.100
libavcodec 56. 13.100 / 56. 13.100
libavformat 56. 15.102 / 56. 15.102
libavdevice 56. 3.100 / 56. 3.100
libavfilter 5. 2.103 / 5. 2.103
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 1.100 / 1. 1.100
libpostproc 53. 3.100 / 53. 3.100
Routing option strict to both codec and muxer layer
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f9a0a002c00] overread end of atom 'colr' by 1 bytes
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f9a0a002c00] stream 0, timescale not set
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f9a0a002c00] max_analyze_duration 5000000 reached at 5003637 microseconds
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '300.mp4':
Metadata:
major_brand : mp42
minor_version : 0
compatible_brands: mp42isomavc1
creation_time : 2014-01-11 05:39:32
genre : Trailer
artist : Warner Bros.
title : 300: Rise of an Empire - Trailer 2
encoder : HandBrake 0.9.9 2013051800
date : 2014
Duration: 00:02:33.24, start: 0.000000, bitrate: 7377 kb/s
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 1920x1080 (1920x1088), 7219 kb/s, 23.98 fps, 23.98 tbr, 90k tbn, 47.95 tbc (default)
Metadata:
creation_time : 2014-01-11 05:39:32
encoder : JVT/AVC Coding
Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 157 kb/s (default)
Metadata:
creation_time : 2014-01-11 05:39:32
Stream #0:2: Video: mjpeg, yuvj420p(pc, bt470bg/unknown/unknown), 101x150 [SAR 72:72 DAR 101:150], 90k tbr, 90k tbn, 90k tbc
rtmp://nessma-****.channel.mediaservices.windows.net:1935/live/584c99f5c47f424d9e83ac95364331e7: Input/output error

The Azure blog post clearly states that this should be possible but I can't find a working example anywhere.
Environment:
- MacOS Maverick
- FFMPEG installed from official build
- 300.mp4: 1080p trailer of the latest 300 movie
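Nothing in the log above pins down the cause of the Input/output error on the RTMP URL. As a diagnostic aid only, here is a small Python sketch that reruns essentially the same push and captures ffmpeg's stderr to a file for inspection. The -loglevel debug and -re flags are additions not present in the original command (the latter paces the file like a live source), and the ingest URL below is the redacted one from the post.

import subprocess

# Re-run the push from the question with debug logging so the RTMP handshake
# can be inspected in detail. The ingest URL is the redacted one from the
# post; substitute your channel's actual publish URL before running.
INGEST_URL = ("rtmp://nessma-****.channel.mediaservices.windows.net:1935"
              "/live/584c99f5c47f424d9e83ac95364331e7")

cmd = [
    "ffmpeg", "-loglevel", "debug",
    "-re",                    # addition: read the file at native frame rate
    "-i", "300.mp4",
    "-strict", "-2",
    "-c:a", "aac", "-b:a", "128k", "-ar", "44100",
    "-r", "30", "-g", "60", "-keyint_min", "60",
    "-b:v", "400000", "-c:v", "libx264", "-preset", "medium",
    "-bufsize", "400k", "-maxrate", "400k",
    "-f", "flv", INGEST_URL,
]

# ffmpeg logs to stderr; capture it to a file for later inspection.
with open("rtmp_push.log", "wb") as log:
    subprocess.run(cmd, stderr=log, check=False)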