
Other articles (69)
-
Contributing to its documentation
10 April 2011
Documentation is one of the most important and most demanding parts of building a technical tool.
Any outside contribution on this front is essential: critiquing the existing documentation; helping to write articles aimed at users (MediaSPIP administrators or simply content producers) or at developers; creating explanatory screencasts; translating the documentation into a new language.
To do so, you can register on (...) -
Changing the publication date
21 June 2013
How do you change the publication date of a media item?
You first need to add a "Date de publication" field to the appropriate form template:
Administrer > Configuration des masques de formulaires > select "Un média"
Under "Champs à ajouter", tick "Date de publication"
Click "Enregistrer" (Save) at the bottom of the page -
Enhancing its visual appearance
10 April 2011
MediaSPIP is based on a system of themes and templates (squelettes). The templates define where information is placed on the page, defining a specific use of the platform, while the themes provide the overall graphic design.
Anyone can propose a new graphic theme or template and make it available to the community.
On other sites (6689)
-
Decoding a h264 (High) stream with OpenCV's ffmpeg on Ubuntu
9 January 2017, by arvids
I am working with a video stream (no audio) from an IP camera on Ubuntu 14.04. I am also a beginner with Ubuntu and everything on it. Everything was going great with a camera that has these parameters (from FFmpeg):
Input #0, rtsp, from 'rtsp://*private*:8900/live.sdp': 0B f=0/0
Metadata:
title : RTSP server
Stream #0:0: Video: h264 (Main), yuv420p(progressive), 352x192, 29.97 tbr, 90k tbn, 180k tbc
But then I changed to a newer camera, which has these parameters:
Input #0, rtsp, from 'rtsp://*private*/media/video2':0B f=0/0
Metadata:
title : VCP IPC Realtime stream
Stream #0:0: Video: h264 (High), yuvj420p(pc, bt709, progressive), 1280x720, 25 fps, 25 tbr, 90k tbn, 50 tbc
My C++ program uses OpenCV 3 to process the stream. By default, OpenCV uses FFmpeg to decode and display the stream via the VideoCapture class:
VideoCapture vc;
vc.open(input_stream);               // input_stream holds the RTSP URL
Mat frame;
while (vc.read(frame) && !frame.empty()) {
    // *do work*
}
With the new camera stream I get errors like these (from FFmpeg):
[h264 @ 0x7c6980] cabac decode of qscale diff failed at 41 38
[h264 @ 0x7c6980] error while decoding MB 41 38, bytestream (3572)
[h264 @ 0x7c6980] left block unavailable for requested intra mode at 0 44
[h264 @ 0x7c6980] error while decoding MB 0 44, bytestream (4933)
[h264 @ 0x7bc2c0] SEI type 25 truncated at 208
[h264 @ 0x7bfaa0] SEI type 25 truncated at 206
[h264 @ 0x7c6980] left block unavailable for requested intra mode at 0 18
[h264 @ 0x7c6980] error while decoding MB 0 18, bytestream (14717)
The image is sometimes glitched, sometimes completely frozen. After anywhere from a few seconds to a few minutes the stream freezes completely, without any error. However, it plays perfectly in VLC. I installed the newest version (3.2.2) of FFmpeg, built with:
./configure --enable-gpl --enable-libx264
When playing the stream directly with ffplay (instead of opening it from my code with OpenCV's VideoCapture), it plays better and doesn't freeze, but still sometimes prints warnings:
[NULL @ 0x7f834c008c00] SEI type 25 size 896 truncated at 320=1/1
[h264 @ 0x7f834c0d5d20] SEI type 25 size 896 truncated at 319=1/1
[rtsp @ 0x7f834c0008c0] max delay reached. need to consume packet
[rtsp @ 0x7f834c0008c0] RTP: missed 1 packets
[h264 @ 0x7f834c094740] concealing 675 DC, 675 AC, 675 MV errors in P frame
[NULL @ 0x7f834c008c00] SEI type 25 size 896 truncated at 320=1/1
Changing the camera hardware is not an option. The camera can be set to encode H.265 or MJPEG instead, but in MJPEG mode it can only output 5 fps, which is not enough. Decoding to a static video file is not an option either, because I need to display real-time results from the stream. Here is a list of the API backends that can be used with VideoCapture. Maybe I should switch to some other decoder and player?
From my research I conclude that I have these options:
-
Somehow get OpenCV to use the FFmpeg build from another directory, where it is compiled with libx264
-
Somehow get OpenCV to use libVLC instead of FFmpeg
One example of switching to VLC is here, but I don't understand it well enough to say whether that is what I need (a minimal sketch of the idea is given after this list). Or maybe I should be parsing the stream in code myself? I don't rule out that this could be some basic problem caused by missing dependencies, because, as I said, I'm a beginner with Ubuntu.
- Use VLC to preprocess the stream, as suggested here.
This is probably slow, which again is bad for real-time results.
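As a rough illustration of option 2 (not the linked example itself, just a minimal sketch): libVLC can decode the RTSP stream itself and hand decoded RGB pictures to a callback, where they are wrapped in a cv::Mat, so OpenCV's bundled FFmpeg never touches the stream. The URL, the 1280x720 frame size and the display loop below are placeholders for the actual setup, and error handling is omitted:

#include <mutex>
#include <opencv2/opencv.hpp>
#include <vlc/vlc.h>

struct VlcContext {
    cv::Mat frame;   // buffer that VLC decodes into (RGB, 3 bytes per pixel)
    std::mutex mtx;  // guards the buffer between VLC's thread and ours
};

// VLC asks where to write the next decoded picture.
static void *lock_cb(void *opaque, void **planes) {
    VlcContext *ctx = static_cast<VlcContext *>(opaque);
    ctx->mtx.lock();
    planes[0] = ctx->frame.data;
    return nullptr;  // picture identifier, not needed here
}

// VLC has finished writing the picture.
static void unlock_cb(void *opaque, void *picture, void *const *planes) {
    static_cast<VlcContext *>(opaque)->mtx.unlock();
}

int main() {
    const unsigned width = 1280, height = 720;  // must match the camera stream
    VlcContext ctx;
    ctx.frame = cv::Mat::zeros(height, width, CV_8UC3);

    libvlc_instance_t *vlc = libvlc_new(0, nullptr);
    libvlc_media_t *media =
        libvlc_media_new_location(vlc, "rtsp://*private*/media/video2");
    libvlc_media_player_t *player = libvlc_media_player_new_from_media(media);
    libvlc_media_release(media);

    // Ask VLC for 24-bit RGB pictures with the Mat's row pitch.
    libvlc_video_set_format(player, "RV24", width, height, width * 3);
    libvlc_video_set_callbacks(player, lock_cb, unlock_cb, nullptr, &ctx);
    libvlc_media_player_play(player);

    for (;;) {
        cv::Mat bgr;
        {
            std::lock_guard<std::mutex> guard(ctx.mtx);
            // OpenCV expects BGR; swap channels from VLC's RGB output.
            cv::cvtColor(ctx.frame, bgr, cv::COLOR_RGB2BGR);
        }
        cv::imshow("frame", bgr);            // *do work* on bgr here instead
        if (cv::waitKey(30) == 27) break;    // Esc quits
    }

    libvlc_media_player_stop(player);
    libvlc_media_player_release(player);
    libvlc_release(vlc);
    return 0;
}

Build it against both libraries (e.g. via pkg-config for opencv and libvlc). The loop above simply redisplays whatever is currently in the buffer; a real program would signal new frames from a display callback instead of polling.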
Any suggestions and comments will be appreciated. -
-
x86 : AVX2 high bit-depth load_deinterleave_chroma
18 January 2017, by Henrik Gramner -
Converting AAC stream to DASH MP4 with high fragment length precision
5 March 2017, by vdudouyt
For my HTML5 project I need to create a fragmented MP4 file with a single audio stream (no video), in which each fragment has a duration of exactly 0.1 seconds.
According to the ffmpeg docs, you can accomplish that by passing a value in microseconds with '-frag_duration', which I found to work and to be playable with the HTML5 MediaSource API:
$ ffmpeg -y -i input.aac -c:a libfdk_aac -b:a 64k -level:v 13 -r 25 -strict experimental -movflags empty_moov+default_base_moof -frag_duration 100000 output.mp4
Since we have 210 seconds of audio split into 0.1 s fragments, I expect output.mp4 to contain 2100 fragments, hence 2100 moof atoms. But upon inspecting it I found that there are only 1811 moof atoms, which means that some (or maybe even all) fragments are bigger than expected:
$ python ~/git/mp4viewer/src/showboxes.py output.mp4 |grep moof|wc -l
1811
Could anybody tell me what's wrong, and how I could accomplish what I want?
Right now my assumption is that the AAC frame length produced during encoding is not a multiple of 0.1 s, so ffmpeg has no way to produce fragments that are exactly 0.1 s long, but I'm not sure. If somebody could confirm this and let me know a way to explicitly set the AAC frame_size in FFmpeg (I couldn't find anything like that in the docs), or completely disprove it, that would also be highly appreciated.
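For what it's worth, a quick back-of-the-envelope check of that assumption lands close to the observed count. This is a hypothetical calculation: the 44.1 kHz sample rate and the standard 1024-sample AAC-LC frame are assumptions, since neither is stated above.

#include <cmath>
#include <cstdio>

int main() {
    const double sample_rate     = 44100.0;               // assumed input rate
    const double frame_len       = 1024.0 / sample_rate;  // ~23.2 ms per AAC frame
    const double target_frag     = 0.100;                 // -frag_duration 100000 (us)
    const double frames_per_frag = std::ceil(target_frag / frame_len);  // 5 frames
    const double real_frag       = frames_per_frag * frame_len;         // ~116 ms
    std::printf("fragments in 210 s: %.0f\n", 210.0 / real_frag);       // ~1809
    return 0;
}

Under those assumptions each fragment holds 5 AAC frames (about 116 ms), which gives roughly 1809 fragments for 210 seconds of audio, in the same ballpark as the 1811 moof atoms observed. AAC-LC frames are normally 1024 samples long, so the frame duration follows from the sample rate rather than from an encoder option, and a fragment boundary can only fall on a frame boundary.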