
Media (1)
-
The Slip - Artworks
26 September 2011, by
Updated: September 2011
Language: English
Type: Text
Other articles (82)
-
Support for all types of media
10 April 2011. Unlike many modern software packages and document-sharing platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); or text-based content, code and more (OpenOffice, Microsoft Office (spreadsheets, presentations), web (HTML, CSS), LaTeX, Google Earth) (...)
-
HTML5 audio and video support
10 April 2011. MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
These technologies make it possible to deliver video and sound both on conventional computers (...)
-
HTML5 audio and video support
13 April 2011, by
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The player used was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (7981)
-
ffmpeg: sws_scale not able to convert image from YUV to RGB
1 October 2020, by Abhinav Singh
I'm trying to convert my image from YUV420P to RGB24. Below is my code for extracting the frame and decoding it.

Frame extraction:


while(av_read_frame(avc, avpacket) == 0) {
    if(avpacket->stream_index == videoStream) {
        int res = avcodec_send_packet(avctx, avpacket);
        if(res == AVERROR(EAGAIN)) {
            continue;
        }
        else if(res < 0) {
            std::cout << "error reading packet\n";
            break;
        }
        else {
            AVFrame* avframeRGB = av_frame_alloc();
            res = avcodec_receive_frame(avctx, avframeRGB);
            if(res == AVERROR(EAGAIN)) {
                continue;
            }
            else if(res < 0) {
                std::cout << "Error reading frame";
                break;
            }
            else {
                ++i;
                convertImage(&avframeRGB, avctx->pix_fmt, AV_PIX_FMT_RGB24);
                displayImage(avframeRGB);
            }
        }
    }
}



Function convertImage


void convertImage(AVFrame** frame, AVPixelFormat srcFmt, AVPixelFormat destFmt) {
    AVFrame* tframe = *frame;
    struct SwsContext* swscaler = sws_getContext(tframe->width, tframe->height, srcFmt,
                                                 tframe->width, tframe->height, destFmt,
                                                 SWS_BILINEAR, NULL, NULL, NULL);
    AVFrame* tmp = av_frame_alloc();
    tmp->width = tframe->width;
    tmp->height = tframe->height;
    tmp->format = destFmt;
    if(av_frame_get_buffer(tmp, 32) != 0)
        return;
    int res = sws_scale(swscaler, (uint8_t const* const*)tframe->data, tframe->linesize,
                        0, tframe->height, tmp->data, tmp->linesize);
    *frame = tmp;
}



Before calling convertImage, avframeRGB contains:


data : {0x555555988ec0 '\020' <repeats 200 times>..., 0x555555a6f940 '\200' <repeats 200 times>..., 0x555555aa9440 '\200' <repeats 200 times>..., 0x0, 0x0, 0x0, 0x0, 0x0}
height : 720
width : 1280


After calling convertImage, it contains:


data : {0x7fffdde19040 "", 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0}
height : 0
width : 0



I am not able to understand why my data is NULL and my width and height are 0.


FFmpeg version: 4.3.2
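For reference, below is a minimal sketch of how a YUV420P-to-RGB24 conversion is commonly written against the FFmpeg 4.x C API. It is an illustration rather than the asker's code: the helper name toRGB24, the 32-byte alignment and the error handling are assumptions.

// Hedged sketch: convert a decoded YUV420P frame to RGB24 with libswscale.
// Assumes FFmpeg 4.x; the source frame's width, height and format drive everything.
extern "C" {
#include <libavutil/frame.h>
#include <libswscale/swscale.h>
}

AVFrame* toRGB24(AVFrame* src) {
    AVFrame* dst = av_frame_alloc();
    if (!dst)
        return nullptr;
    dst->width  = src->width;
    dst->height = src->height;
    dst->format = AV_PIX_FMT_RGB24;
    if (av_frame_get_buffer(dst, 32) < 0) {          // allocates dst->data / dst->linesize
        av_frame_free(&dst);
        return nullptr;
    }
    SwsContext* sws = sws_getContext(src->width, src->height, (AVPixelFormat)src->format,
                                     dst->width, dst->height, AV_PIX_FMT_RGB24,
                                     SWS_BILINEAR, nullptr, nullptr, nullptr);
    if (!sws) {
        av_frame_free(&dst);
        return nullptr;
    }
    sws_scale(sws, (const uint8_t* const*)src->data, src->linesize,
              0, src->height, dst->data, dst->linesize);
    sws_freeContext(sws);
    return dst;                                      // caller frees src and dst
}

If the caller then replaces its own frame pointer with the returned one, the original decoded frame still has to be freed with av_frame_free before being overwritten, otherwise it leaks on every iteration.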


-
Random PIPELINE_ERROR_DECODE: video decoder reinitialization failed on Chromium HTML5 video tag
9 January 2024, by Hello World

System

Running an Expo React Native app with a <WebView> component that loads a NextJS app containing an HTML5 <video> tag. The system with issues runs the app on Android 9.0 with the WebView implementation set to Android System WebView 84.0. The system without issues runs the app on Android 10.0 with the WebView implementation set to Android System WebView 120.0.

Details


Video files played by the <video> element are transcoded beforehand with the following fluent-ffmpeg node package:

import ffmpeg from 'fluent-ffmpeg'

ffmpeg(rawVideoFileUrl)
 .videoCodec('libx264')
 .audioCodec('aac')
 .outputOption('-strict experimental')
 .outputOption('-movflags frag_keyframe+faststart')
 .format('mp4')



My knowledge of video transcoding is very limited, but I have to transcode to guarantee that the files can be played by my NextJS web app. In my research, my findings pointed to these FFmpeg command-line arguments for this purpose.


For the aforementioned WebView version, H.264 with AAC should be compatible, yet playback is not stable. Most of the time it plays just fine, but other times I receive the error:


PIPELINE_ERROR_DECODE: video decoder reinitialization failed



If I refresh the web app, it works again. That leads me to believe it is somehow memory related, maybe caused by incompatibility.


I'm not sure whether the video decoder reinitialization failed error produced by the HTML5 <video> tag is a transcoding problem or not, as the videos play fine most of the time and only produce that error in the console occasionally and unexpectedly.

Question


Is there something I can do to the fluent-ffmpeg command to make the video files more widely compatible, including with the WebView 84.0 system, or is the issue somewhere else?

In this case, updating WebView is not an option.
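For what it's worth, a change that is often suggested for older Android WebView builds and hardware decoders is to pin the output to a conservative H.264 profile, level and pixel format. A sketch of what that could look like with fluent-ffmpeg follows; the exact profile and level values and the output path are assumptions to experiment with, not a confirmed fix for this error.

import ffmpeg from 'fluent-ffmpeg'

// Sketch: same pipeline as in the question, constrained to settings that older
// hardware decoders commonly accept. Profile/level values are assumptions.
ffmpeg(rawVideoFileUrl)
  .videoCodec('libx264')
  .audioCodec('aac')
  .outputOption('-profile:v baseline')               // conservative H.264 profile
  .outputOption('-level 3.1')                        // modest level for older decoders
  .outputOption('-pix_fmt yuv420p')                  // 8-bit 4:2:0, the most widely decodable
  .outputOption('-movflags frag_keyframe+faststart') // kept from the original command
  .format('mp4')
  .save(transcodedVideoFilePath)                     // hypothetical output path

Whether this actually removes the decoder reinitialization failures on WebView 84 would still need to be verified on the affected device; it only narrows the output to a widely supported baseline.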


-
How would I add an audio channel to an RTSP stream?
9 September 2022, by PlayerWet
Hello everyone. It turns out that my Raspberry Pi can't keep up when it comes to combining video with audio at good quality and sending it over RTSP. But I think I could send the video over RTSP and the audio separately as MP3, then join them again on another computer (a Debian NAS) on my home LAN where I have the Shinobi program (a security camera manager) installed.


I would need something that can somehow grab the RTSP stream plus the separate MP3 audio and merge them into a new RTSP stream. Is this possible, or is it a crazy idea?


On one side I send this, which is the camera transmission over RTSP through v4l2rtspserver:


v4l2rtspserver -H 1080 -W 1920 -F 25 -P 8555 /dev/video0



And separately I send MP3 audio captured from a USB microphone through ffmpeg:


ffmpeg -ac 1 -f alsa -i hw:1,0 -acodec libmp3lame -ab 32k -ac 1 -f rtp rtp://192.168.1.77:12348



My idea is to combine both of these on the NAS server into a new RTSP stream (or some other approach).


But I don't know whether ffmpeg can capture the video from an RTSP stream, join it with the MP3 audio, and produce a new RTSP stream.


Merge these two streams into one and reassemble an RTSP stream:


rtsp://192.168.1.57:8555/unicast

rtp://192.168.1.77:12348



I have tried it this way, but it gives me an error:


ffmpeg \
 -i rtsp://192.168.1.57:8555/unicast \
 -i rtp://192.168.1.37:12348 \
 -acodec copy -vcodec libx264 \
 -f rtp_mpegts "rtp://192.168.1.77:5000?ttl=2"



Error:


[h264 @ 0x55ac6acaf4c0] non-existing PPS 0 referenced
 Last message repeated 1 times
[h264 @ 0x55ac6acaf4c0] decode_slice_header error
[h264 @ 0x55ac6acaf4c0] no frame!
[h264 @ 0x55ac6acaf4c0] non-existing PPS 0 referenced
 Last message repeated 1 times
[h264 @ 0x55ac6acaf4c0] decode_slice_header error
[h264 @ 0x55ac6acaf4c0] no frame!
[h264 @ 0x55ac6acaf4c0] non-existing PPS 0 referenced
 Last message repeated 1 times



What am I doing wrong?
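A couple of hedged observations rather than a definitive answer. The non-existing PPS 0 referenced lines typically mean the H.264 decoder joined the stream before it had seen a keyframe and its parameter sets, and a bare rtp:// input usually needs an SDP file describing the stream (the sending ffmpeg prints one on the console when the -f rtp output starts, which can be saved on the NAS). Under those assumptions, and copying the camera's H.264 instead of re-encoding it, the merge might look something like the sketch below; audio.sdp is a placeholder for that saved SDP, and the addresses are taken from the question.

# Sketch only, not a verified command.
# audio.sdp = the SDP printed by the sending ffmpeg, saved to a file on the NAS.
ffmpeg \
  -rtsp_transport tcp -i rtsp://192.168.1.57:8555/unicast \
  -protocol_whitelist file,udp,rtp -i audio.sdp \
  -map 0:v -map 1:a \
  -c:v copy -c:a aac \
  -f rtp_mpegts "rtp://192.168.1.77:5000?ttl=2"

With -c:v copy the video is never decoded, so the h264 decoder warnings should not appear; whether Shinobi then accepts the rtp_mpegts output, or whether an actual RTSP server (for example MediaMTX, formerly rtsp-simple-server) is needed in front of it, would still have to be tested.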