
Media (1)
-
The Great Big Beautiful Tomorrow
28 October 2011
Updated: October 2011
Language: English
Type: Text
Other articles (40)
-
The statuses of shared hosting instances
13 March 2010
For reasons of general compatibility between the shared-hosting management plugin and SPIP's original functions, instance statuses are the same as for any other object (articles, etc.); only their names in the interface differ slightly.
The possible statuses are: prepa (requested), which corresponds to an instance requested by a user. If the site had already been created in the past, it is switched to disabled mode. publie (validated), which corresponds to an instance validated by a (...)
-
Support for all media types
10 April 2011
Unlike many modern software packages and document-sharing platforms, MediaSPIP aims to handle as many different document formats as possible, whether images (png, gif, jpg, bmp and others...), audio (MP3, Ogg, Wav and others...), video (Avi, MP4, Ogv, mpg, mov, wmv and others...), or textual content, code and other data (OpenOffice, Microsoft Office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)
-
Supporting all media types
13 April 2011
Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
On other sites (8511)
-
Cannot get MediaStream from Electron app to browser via Openvidu media server
11 June 2021, by foxy.bunny
I'm currently having an issue with openvidu-browser-2.17.0.js.


I'm trying to transfer a camera stream from RTSP to an OpenVidu media server deployed in the cloud and to get the stream back in the browser.


The RTSP stream was converted into an HLS stream using FFmpeg and played with hls.js. It was rendered in a video tag in HTML, and I used HTMLMediaElement.captureStream().getVideoTrack() to generate a MediaStreamTrack, which I passed as the videoSource property of initPublisher. This publisher part was wrapped in Electron. The stream was then connected to our cloud OpenVidu server, deployed on-premises exactly as in the docs (https://docs.openvidu.io/en/2.18.0/deployment/deploying-on-premises/). The subscriber part is a simple HTML page displayed in the browser that gets the stream from the cloud server: it creates the session and renders the stream from the media server when the streamCreated event occurs.


The whole workflow worked nicely when we tested it with our webcam; however, when we used the MediaStreamTrack of our video stream instead of the webcam, the subscriber side only showed a blank video.


My questions are:


1: Is it possible to stream a MediaStream to the OpenVidu media server like that?


2: If yes, then what am I doing wrong here?


Describe the bug


Cannot get MediaStream from Electron app to browser via Openvidu media server.


Expected behavior


Receive the stream from the Electron app in the browser via the OpenVidu media server.


Wrong current behavior


Only a blank video is shown.


Client device info



- Chrome Version 91.0.4472.77 (Official Build) (64-bit) on Windows 10 Version 10.0.19042 Build 19042




-
Right way to use vmaf with ffmpeg
20 May 2021, by dravit
I am trying to calculate the VMAF score of a processed video with respect to the original file.


The command I have used:


ffmpeg -y -loglevel info -stats -i original.mp4 -i processed.mp4 -lavfi "[0]null[refdeint];[refdeint]scale=1920:1080:flags=neighbor[ref];[1]setpts=PTS+0.0/TB[b];[b]scale=1920:1080:flags=neighbor[c];[c][ref]libvmaf=log_fmt=json:phone_model=1:model_path={model_path_here}/vmaf_v0.6.1.json:n_subsample=1:log_path=log.json" -f null -



Now, as per the official documentation of vmaf with ffmpeg found here, it says the source/reference file comes first, followed by the encoded/distorted/processed file.

But in almost all of the blogs I came across, the arguments are in the opposite order, i.e. the processed file followed by the original file.

A few examples:



-
https://medium.com/@eyevinntechnology/keep-an-eye-on-the-video-quality-b9bcb58dd5a1: search for "Using VMAF within FFMPEG" in it.


-
https://websites.fraunhofer.de/video-dev/calculating-vmaf-and-psnr-with-ffmpeg/: search for "Metric Calculation with FFmpeg" in it.








EDIT


NOTE: Changing the order does change the VMAF score.


-
FFmpeg decode OGG Vorbis audio as float, how to get float data from decoder
19 November 2020, by cs guy
I am trying to decode an Ogg Vorbis audio file with the following:


// Open the Ogg container; bail out if it cannot be read.
if (avformat_open_input(&pFormatContext, filePath.c_str(), av_find_input_format("ogg"),
                        nullptr) != 0) {
    LOGE("FFmpegExtractor can not open the file! %s", filePath.c_str());
    return FAILED_TO_LOAD;
}

durationInMillis = pFormatContext->duration / AV_TIME_BASE * secondsToMilli;
LOGD("FFmpegExtractor opened the file, Duration: %llu", durationInMillis);

avformat_find_stream_info(pFormatContext, nullptr);

LOGD("pFormatContext streams num: %u", pFormatContext->nb_streams);

int sendPacketResult = 0;
int receiveFrameResult = 0;

for (int i = 0; i < pFormatContext->nb_streams; i++) {
    AVCodecParameters *pLocalCodecParameters = pFormatContext->streams[i]->codecpar;

    if (pLocalCodecParameters->codec_type == AVMEDIA_TYPE_AUDIO) {
        // Find the decoder for this audio stream and set up its context.
        avCodec = avcodec_find_decoder(pLocalCodecParameters->codec_id);
        avCodecContext = avcodec_alloc_context3(avCodec);
        if (!avCodecContext) {
            LOGE("FFmpegExtractor avcodec_alloc_context3 failed!");
            return FAILED_TO_LOAD;
        }

        if (avcodec_parameters_to_context(avCodecContext, pLocalCodecParameters) < 0) {
            LOGE("FFmpegExtractor avcodec_parameters_to_context failed!");
            return FAILED_TO_LOAD;
        }

        if (avcodec_open2(avCodecContext, avCodec, nullptr) < 0) {
            LOGE("FFmpegExtractor avcodec_open2 failed!");
            return FAILED_TO_LOAD;
        }
    }

    // Demux packets and feed them to the decoder.
    while (av_read_frame(pFormatContext, avPacket) >= 0) {
        sendPacketResult = avcodec_send_packet(avCodecContext, avPacket);
        receiveFrameResult = avcodec_receive_frame(avCodecContext, avFrame);

        // TODO: Get frames PCM data from the decoder,
    }
}



I am stuck on the last part. I want to get the decoded data of the current frame as floats between -1 and +1 for all of the audio's channels, but I have no idea how. I looked at the official documentation but could not understand it. How can I get the data from the decoder on the line below?




// TODO: Get frames PCM data from the decoder,
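
For what it's worth, here is a minimal sketch of how that TODO might be filled in, assuming the decode loop above and a successful avcodec_receive_frame() call. FFmpeg's Vorbis decoder normally produces planar floats (AV_SAMPLE_FMT_FLTP), so each channel plane already holds samples in the -1.0 to +1.0 range. The interleaved pcm buffer below is purely illustrative and not part of the original code.

 // Sketch (assumption): this would run right after avcodec_receive_frame() returns 0.
 // Vorbis decodes to AV_SAMPLE_FMT_FLTP: one plane of floats per channel,
 // already normalised to the -1.0 .. +1.0 range.
 if (avCodecContext->sample_fmt == AV_SAMPLE_FMT_FLTP) {
     const int channels = avCodecContext->channels;  // channel count (pre-FFmpeg-5.1 field)
     const int samples  = avFrame->nb_samples;       // samples per channel in this frame

     // Hypothetical interleaved output buffer (needs #include <vector>).
     std::vector<float> pcm(static_cast<size_t>(channels) * samples);

     for (int ch = 0; ch < channels; ++ch) {
         const float *plane = reinterpret_cast<const float *>(avFrame->extended_data[ch]);
         for (int i = 0; i < samples; ++i) {
             // Each value is already a float between -1 and +1.
             pcm[static_cast<size_t>(i) * channels + ch] = plane[i];
         }
     }
     // ... hand `pcm` to the audio output or further processing here ...
 } else {
     // Any other sample format would need converting first, typically with
     // libswresample (swr_convert) targeting AV_SAMPLE_FMT_FLT.
 }

It would also be worth checking the return values of avcodec_send_packet() and avcodec_receive_frame() inside the loop, since a single packet can yield zero or more frames.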