
Media (91)
-
Spitfire Parade - Crisis
15 May 2011
Updated: September 2011
Language: English
Type: Audio
-
Wired NextMusic
14 May 2011
Updated: February 2012
Language: English
Type: Video
-
Video d’abeille en portrait
14 May 2011
Updated: February 2012
Language: French
Type: Video
-
Sintel MP4 Surround 5.1 Full
13 May 2011
Updated: February 2012
Language: English
Type: Video
-
Carte de Schillerkiez
13 May 2011
Updated: September 2011
Language: English
Type: Text
-
Publier une image simplement
13 April 2011
Updated: February 2012
Language: French
Type: Video
Other articles (59)
-
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
-
Support for all types of media
10 April 2011
Unlike many other modern document-sharing programs and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); textual content, code or other (Open Office, Microsoft Office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)
-
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used as a fallback.
The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...)
On other sites (8197)
-
fileapi::WriteFile() doesn't send an input, if processthreadsapi::STARTUPINFO::hStdError is set (ffmpeg)
23 April 2021, by Lidekys

I'm trying to capture my screen using ffmpeg in a different thread (which I create using processthreadsapi::CreateProcess()) so I'd be able to do something else in the main thread, and to redirect the ffmpeg output so it doesn't pop up in the console for the user to see. To stop filming, I send a 'q' input using WriteFile(), and after that I want to save the accumulated ffmpeg output using ReadFile().

However, if I set STARTUPINFO::hStdError (note that ffmpeg output goes to stderr) to a pipe from which I could read the accumulated data, the inputs I send using WriteFile() are no longer registered and ffmpeg.exe keeps running.

I've tried redirecting the ffmpeg output in a simple command line, but there I can still stop the process by pressing the q button.

Also, if I record for less than 8 seconds, the input is registered and ffmpeg.exe closes.

Is there something wrong with my code, or is it a processthreadsapi issue? Any hints will be kindly appreciated!


Here's a minimal example of how I am trying to do it:



#include <iostream>

#include <windows.h>
#include <conio.h>

using namespace std;

HANDLE g_hChildStd_IN_Rd = NULL;
HANDLE g_hChildStd_IN_Wr = NULL;
HANDLE g_hChildStd_OUT_Rd = NULL;
HANDLE g_hChildStd_OUT_Wr = NULL;

int main()
{
    // Create IN and OUT pipes
    SECURITY_ATTRIBUTES saAttr;
    saAttr.nLength = sizeof(SECURITY_ATTRIBUTES);
    saAttr.bInheritHandle = TRUE;           // the child must inherit the pipe handles
    saAttr.lpSecurityDescriptor = NULL;

    if (!CreatePipe(&g_hChildStd_OUT_Rd, &g_hChildStd_OUT_Wr, &saAttr, 0))
        cout << "StdoutRd CreatePipe error" << endl;

    if (!CreatePipe(&g_hChildStd_IN_Rd, &g_hChildStd_IN_Wr, &saAttr, 0))
        cout << "Stdin CreatePipe error" << endl;

    // Hand the child its stdin and stderr (ffmpeg logs to stderr)
    STARTUPINFO siStartInfo;
    PROCESS_INFORMATION piProcInfo;
    ZeroMemory(&siStartInfo, sizeof(STARTUPINFO));
    ZeroMemory(&piProcInfo, sizeof(PROCESS_INFORMATION));
    siStartInfo.cb = sizeof(STARTUPINFO);
    siStartInfo.hStdError = g_hChildStd_OUT_Wr;  // the line in question: with this set, 'q' is no longer registered
    siStartInfo.hStdInput = g_hChildStd_IN_Rd;
    siStartInfo.dwFlags |= STARTF_USESTDHANDLES;

    // Start recording
    char cmdLine[] = "ffmpeg -y -f gdigrab -framerate 2 -i desktop record.avi";
    if (!CreateProcess(NULL,
        cmdLine,        // command line
        NULL,           // process security attributes
        NULL,           // primary thread security attributes
        TRUE,           // handles are inherited
        0,              // creation flags
        NULL,           // use parent's environment
        NULL,           // use parent's current directory
        &siStartInfo,   // STARTUPINFO pointer
        &piProcInfo))   // receives PROCESS_INFORMATION
    {
        cout << "Error create process" << endl;
    }

    // Record for a while
    while (getch() != 'k') {
        cout << "While press k" << endl;
    }

    // Stop recording by emulating a Q button push
    DWORD dwWritten;
    CHAR chBufW[1] = { 'q' };

    if (!WriteFile(g_hChildStd_IN_Wr, chBufW, 1, &dwWritten, NULL))
        cout << "Error write file" << endl;

    // Save stderr (ffmpeg) data from the read end of the pipe
    DWORD dwRead;
    char stdErrorData[4096];
    bool bSuccess;

    bSuccess = ReadFile(g_hChildStd_OUT_Rd, stdErrorData, 4096, &dwRead, NULL);

    if (!bSuccess || dwRead == 0)
        cout << "Read failed" << endl;

    return 0;
}


-
avformat/mov: fix seeking with HEVC open GOP files
18 February 2022, by Clément Bœsch

avformat/mov: fix seeking with HEVC open GOP files

This was tested with media recorded from an iPhone XR and an iPhone 13.

Here is what a typical stream looks like in coding order:
┌────────┬─────┬─────┬──────────┐
│ sample | PTS | DTS | keyframe |
├────────┼─────┼─────┼──────────┤
┊ ┊ ┊ ┊ ┊
│ 53 │ 560 │ 510 │ No │
│ 54 │ 540 │ 520 │ No │
│ 55 │ 530 │ 530 │ No │
│ 56 │ 550 │ 540 │ No │
│ 57 │ 600 │ 550 │ Yes │
│ * 58 │ 580 │ 560 │ No │
│ * 59 │ 570 │ 570 │ No │
│ * 60 │ 590 │ 580 │ No │
│ 61 │ 640 │ 590 │ No │
│ 62 │ 620 │ 600 │ No │
┊        ┊     ┊     ┊          ┊

In composition/display order:
┌────────┬─────┬─────┬──────────┐
│ sample | PTS | DTS | keyframe |
├────────┼─────┼─────┼──────────┤
┊ ┊ ┊ ┊ ┊
│ 55 │ 530 │ 530 │ No │
│ 54 │ 540 │ 520 │ No │
│ 56 │ 550 │ 540 │ No │
│ 53 │ 560 │ 510 │ No │
│ * 59 │ 570 │ 570 │ No │
│ * 58 │ 580 │ 560 │ No │
│ * 60 │ 590 │ 580 │ No │
│ 57 │ 600 │ 550 │ Yes │
│ 63 │ 610 │ 610 │ No │
│ 62 │ 620 │ 600 │ No │
┊        ┊     ┊     ┊          ┊

Samples/frames 58, 59 and 60 are B-frames which actually depend on the key frame (57). Here the key frame is not an IDR but a "CRA" (Clean Random Access).

Initially, I thought I could rely on the sdtp box (independent and disposable samples), but unfortunately:

sdtp[54] is_leading:0 sample_depends_on:1 sample_is_depended_on:0 sample_has_redundancy:0
sdtp[55] is_leading:0 sample_depends_on:1 sample_is_depended_on:2 sample_has_redundancy:0
sdtp[56] is_leading:0 sample_depends_on:1 sample_is_depended_on:2 sample_has_redundancy:0
sdtp[57] is_leading:0 sample_depends_on:2 sample_is_depended_on:0 sample_has_redundancy:0
sdtp[58] is_leading:0 sample_depends_on:1 sample_is_depended_on:0 sample_has_redundancy:0
sdtp[59] is_leading:0 sample_depends_on:1 sample_is_depended_on:2 sample_has_redundancy:0
sdtp[60] is_leading:0 sample_depends_on:1 sample_is_depended_on:2 sample_has_redundancy:0
sdtp[61] is_leading:0 sample_depends_on:1 sample_is_depended_on:0 sample_has_redundancy:0
sdtp[62] is_leading:0 sample_depends_on:1 sample_is_depended_on:0 sample_has_redundancy:0

The information that might have been useful here would have been is_leading, but all the samples are set to 0, so this was unusable.

Instead, we need to rely on the sgpd/sbgp tables. In my case the video track contained 3 sgpd tables with the following grouping types: tscl, sync and tsas. In the sync table we have the following 2 entries (only):

sgpd.sync[1]: sync nal_unit_type:0x14
sgpd.sync[2]: sync nal_unit_type:0x15

(The count starts at 1 because 0 carries the "undefined" semantic; we'll see that later in the reference table.)

The NAL unit types presented here correspond to:

libavcodec/hevc.h: HEVC_NAL_IDR_N_LP = 20,
libavcodec/hevc.h: HEVC_NAL_CRA_NUT = 21,

In parallel, the sbgp sync table contains the following:
┌────┬───────┬─────┐
│ id │ count │ gdi │
├────┼───────┼─────┤
│ 0 │ 1 │ 1 │
│ 1 │ 56 │ 0 │
│ 2 │ 1 │ 2 │
│ 3 │ 59 │ 0 │
│ 4 │ 1 │ 2 │
│ 5 │ 59 │ 0 │
│ 6 │ 1 │ 2 │
│ 7 │ 59 │ 0 │
│ 8 │ 1 │ 2 │
│ 9 │ 59 │ 0 │
│ 10 │ 1 │ 2 │
│ 11 │ 11 │ 0 │
└────┴───────┴─────┘

The gdi column (group description index) directly refers to the index in the sgpd.sync table. This means the first frame is an IDR, then we have batches of undefined frames interleaved with CRA frames. No IDR ever appears again (tried on a 30+ second sample).
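To make that mapping concrete, here is a rough sketch (not code from the patch; the struct and names are illustrative) of how such run-length sbgp entries expand into one group description index per sample:

#include <cstdint>
#include <vector>

// Hypothetical representation of one sbgp entry: a run of sample_count
// consecutive samples sharing the same group description index.
struct SbgpEntry {
    uint32_t sample_count;
    uint32_t group_description_index;  // 0 = no group, otherwise a 1-based index into sgpd
};

// Expand the run-length table into one index per sample, so a given sample
// can be classified directly (here: 1 -> IDR, 2 -> CRA, 0 -> not a sync sample).
std::vector<uint32_t> expand_sbgp(const std::vector<SbgpEntry>& entries)
{
    std::vector<uint32_t> per_sample;
    for (const auto& e : entries)
        per_sample.insert(per_sample.end(), e.sample_count, e.group_description_index);
    return per_sample;
}

With the table above, this yields the first sample as an IDR and then one CRA roughly every 60 samples, which is consistent with the coding-order table at the top.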
With that information, we can build a heuristic using the presentation order.

A few things needed to be introduced in this commit:

1. min_sample_duration is extracted from the stts: we need the minimal step between samples in order to PTS-step backward to a valid point.
2. In order to avoid a systematic loop over the ctts table during a seek, we build an expanded list of sample offsets which will be used to translate from DTS to PTS.
3. An open_key_samples index to keep track of all the non-IDR key frames; for now it only supports HEVC CRA frames. We should probably add BLA frames as well, but I don't have any sample, so I preferred to leave that for later.
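As a rough illustration of point 3 (not the actual code from the patch; the structure and names are simplified), the lookup over such an index can stay a plain linear scan:

#include <cstdint>

// Simplified stand-in for the per-track state described above: a sorted
// array of sample numbers that are non-IDR key frames (HEVC CRA for now).
struct OpenKeyIndex {
    const int* samples;  // open-key sample numbers, in increasing order
    int count;
};

// Linear scan; as noted in the commit message, this only runs on seek
// events, and could later be swapped for a binary search if needed.
static bool is_open_key_sample(const OpenKeyIndex* idx, int sample)
{
    for (int i = 0; i < idx->count; i++) {
        if (idx->samples[i] == sample)
            return true;
        if (idx->samples[i] > sample)  // sorted, so we can stop early
            break;
    }
    return false;
}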
It is entirely possible I missed something obvious in my approach, but I couldn't come up with a better solution. Also, as mentioned in the diff, we could optimize is_open_key_sample(), but the linear scan overhead should be fine for now since it only happens on seek events.

Fixing this issue prevents sending broken packets to the decoder. With the FFmpeg hevc decoder the frames are skipped; with VideoToolbox the frames are glitching.
-
Call to avformat_find_stream_info prevents decoding of simple PNG image?
10 April 2014, by kloffy

I am running into a problem decoding a simple PNG image with libav. The decode_ok flag after the call to avcodec_decode_video2 is set to 0, even though the packet contains the entire image. Through some experimentation, I have managed to pinpoint the issue, and it seems related to calling avformat_find_stream_info. If the call is removed, the example runs successfully. However, I would like to use the same code for other media, and calling avformat_find_stream_info is recommended in the documentation.

The following minimal example illustrates the behavior (unfortunately still a bit lengthy):
#include <iostream>
extern "C"
{
#include <libavformat></libavformat>avformat.h>
#include <libavcodec></libavcodec>avcodec.h>
}
// Nothing to see here, it's just a helper function
AVCodecContext* open(AVMediaType mediaType, AVFormatContext* formatContext)
{
auto ret = 0;
if ((ret = av_find_best_stream(formatContext, mediaType, -1, -1, nullptr, 0)) < 0)
{
std::cerr << "Failed to find video stream." << std::endl;
return nullptr;
}
auto codecContext = formatContext->streams[ret]->codec;
auto codec = avcodec_find_decoder(codecContext->codec_id);
if (!codec)
{
std::cerr << "Failed to find codec." << std::endl;
return nullptr;
}
if ((ret = avcodec_open2(codecContext, codec, nullptr)) != 0)
{
std::cerr << "Failed to open codec context." << std::endl;
return nullptr;
}
return codecContext;
}
// All the interesting bits are here
int main(int argc, char* argv[])
{
auto path = "/path/to/test.png"; // Replace with valid path to PNG
auto ret = 0;
av_log_set_level(AV_LOG_DEBUG);
av_register_all();
avcodec_register_all();
auto formatContext = avformat_alloc_context();
if ((ret = avformat_open_input(&formatContext, path, NULL, NULL)) != 0)
{
std::cerr << "Failed to open input." << std::endl;
return -1;
}
av_dump_format(formatContext, 0, path, 0);
//*/ Info is successfully found, but interferes with decoding
if((ret = avformat_find_stream_info(formatContext, nullptr)) < 0)
{
std::cerr << "Failed to find stream info." << std::endl;
return -1;
}
av_dump_format(formatContext, 0, path, 0);
//*/
auto codecContext = open(AVMEDIA_TYPE_VIDEO, formatContext);
AVPacket packet;
av_init_packet(&packet);
if ((ret = av_read_frame(formatContext, &packet)) < 0)
{
std::cerr << "Failed to read frame." << std::endl;
return -1;
}
auto frame = av_frame_alloc();
auto decode_ok = 0;
if ((ret = avcodec_decode_video2(codecContext, frame, &decode_ok, &packet)) < 0 || !decode_ok)
{
std::cerr << "Failed to decode frame." << std::endl;
return -1;
}
av_frame_free(&frame);
av_free_packet(&packet);
avcodec_close(codecContext);
avformat_close_input(&formatContext);
av_free(formatContext);
return 0;
}
The format dump before avformat_find_stream_info prints:

Input #0, image2, from '/path/to/test.png':
  Duration: N/A, bitrate: N/A
    Stream #0:0, 0, 1/25: Video: png, 25 tbn

The format dump after avformat_find_stream_info prints:

Input #0, image2, from '/path/to/test.png':
  Duration: 00:00:00.04, start: 0.000000, bitrate: N/A
    Stream #0:0, 1, 1/25: Video: png, rgba, 512x512 [SAR 3780:3780 DAR 1:1], 1/25, 25 tbr, 25 tbn, 25 tbc

So it looks like the search yields potentially useful information. Can anybody shed some light on this problem? Other image formats seem to work fine. I assume that this is a simple user error rather than a bug.
Edit: Debug logging was already enabled, but the PNG decoder does not produce a lot of output. I have also tried setting a custom logging callback.

Here is what I get without the call to avformat_find_stream_info, when decoding succeeds:

Statistics: 52125 bytes read, 0 seeks

And here is what I get with the call to avformat_find_stream_info, when decoding fails:

Statistics: 52125 bytes read, 0 seeks
detected 8 logical cores

The image is 52125 bytes, so the whole file is read. I am not sure what the logical cores are referring to.
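(Not part of the original question.) One diagnostic worth trying, on the assumption that the probing pass leaves the demuxer at an awkward read position, is to rewind to the start of the input right after avformat_find_stream_info. av_seek_frame is a standard libavformat call; this is only a sketch of an experiment, not a known fix.

// Diagnostic sketch: rewind the demuxer after probing so that the
// subsequent av_read_frame starts again from the first packet.
if ((ret = avformat_find_stream_info(formatContext, nullptr)) < 0)
{
    std::cerr << "Failed to find stream info." << std::endl;
    return -1;
}
if ((ret = av_seek_frame(formatContext, -1, 0, AVSEEK_FLAG_BACKWARD)) < 0)
{
    std::cerr << "Failed to rewind after probing." << std::endl;
}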