
Other articles (97)
-
Authorizations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page -
Participate in its documentation
10 April 2011. Documentation is one of the most important and most demanding tasks in the creation of a technical tool.
Any outside contribution on this subject is essential: critique of what already exists; participation in writing articles aimed at users (MediaSPIP administrators or simply content producers) or at developers; creation of explanatory screencasts; translation of the documentation into a new language;
To do so, you can register at (...) -
Contribute to a better visual interface
13 April 2011. MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.
On other sites (6031)
-
HEVC/H.265 interlaced format support in ffmpeg or VLC
30 December 2020, by Ernestas Gruodis
The "Music Box Russia" channel transmits over satellite in HEVC 1920x1080 25 fps interlaced, and after recording, VLC recognizes the file as 50 fps with a resolution of 1920x540, half the height. But the player on the satellite tuner works fine: it plays the file as 1920x1080 25 fps... When can we expect support for interlaced HEVC/H.265? Here is the recorded file (Garry Grey & Eva Miller - wtf). Also, VLC player statistics report a lot of lost frames.


EDIT:

I found some interesting info on how interlaced video content can be indicated in HEVC here:

Unlike H.264/AVC, interlace-dedicated coding does not exist in HEVC:


- No mixed frame-field interaction (like PAFF in H.264/AVC)
- No interlace scanning of transform coefficients
- No correction of MVX[1] (or the y-component of the MV) if the current and reference pictures are of different polarity (top-bottom or bottom-top)

However, in HEVC interlaced video content can be indicated (signaled in the VPS/SPS and in pic_timing SEI messages; the latter are transmitted for every picture in the sequence). Interlace-related settings:

- In the VPS/SPS, set general_interlaced_source_flag=1 and general_progressive_source_flag=0. Indeed, the HEVC standard says: if general_progressive_source_flag is equal to 0 and general_interlaced_source_flag is equal to 1, the source scan type of the pictures in the CVS should be interpreted as interlaced only.
- In the VPS/SPS, set general_frame_only_constraint_flag=0.
- In the SPS VUI, set field_seq_flag=1 and frame_field_info_present_flag=1. Notice that if these flags are on, picture timing SEIs shall be present for each picture.
- Transmit a Picture Timing SEI per picture with the following parameters: source_scan_type=0 to indicate interlace mode; pict_struct=1 for a top-field picture and pict_struct=2 for a bottom-field picture.

Perhaps it is possible to pass these parameters to ffmpeg/VLC before playing a file?
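
In the meantime, a practical way to see what FFmpeg actually recovers from this signaling is to decode a few pictures and print the interlace information it exposes. Below is a minimal sketch against the libavformat/libavcodec C API; the file name is a placeholder, the interlaced_frame/top_field_first fields assume a pre-6.0 FFmpeg, and most error handling is omitted:

// Sketch: probe a recording and print the interlace info FFmpeg derives
// from the VPS/SPS and the pic_timing SEI. Placeholder file name; assumes
// FFmpeg development headers and a pre-6.0 AVFrame layout.
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
}
#include <cstdio>

int main()
{
    AVFormatContext* fmt = nullptr;
    if (avformat_open_input(&fmt, "recording.ts", nullptr, nullptr) < 0)
        return 1;
    avformat_find_stream_info(fmt, nullptr);

    int vidx = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, nullptr, 0);
    if (vidx < 0)
        return 1;

    // AV_FIELD_TT / AV_FIELD_BB here indicates field-coded content
    // (field_seq_flag=1); AV_FIELD_PROGRESSIVE means no interlace signaling.
    std::printf("field_order: %d\n", fmt->streams[vidx]->codecpar->field_order);

    const AVCodec* dec = avcodec_find_decoder(fmt->streams[vidx]->codecpar->codec_id);
    AVCodecContext* ctx = avcodec_alloc_context3(dec);
    avcodec_parameters_to_context(ctx, fmt->streams[vidx]->codecpar);
    avcodec_open2(ctx, dec, nullptr);

    AVPacket* pkt = av_packet_alloc();
    AVFrame* frame = av_frame_alloc();
    int shown = 0;
    while (shown < 8 && av_read_frame(fmt, pkt) >= 0) {
        if (pkt->stream_index == vidx && avcodec_send_packet(ctx, pkt) >= 0)
            while (shown < 8 && avcodec_receive_frame(ctx, frame) >= 0)
                // For field_seq_flag=1 content each decoded picture is a
                // single field, hence the reported 1920x540 at 50 fps.
                std::printf("pic %d: %dx%d interlaced=%d tff=%d\n", shown++,
                            frame->width, frame->height,
                            frame->interlaced_frame, frame->top_field_first);
        av_packet_unref(pkt);
    }
    av_frame_free(&frame);
    av_packet_free(&pkt);
    avcodec_free_context(&ctx);
    avformat_close_input(&fmt);
    return 0;
}

If the recording itself lacks this signaling, players cannot be told to assume it from the command line as far as I know; a common workaround is to weave the pairs of 540-line field pictures back into 1080-line frames with a filter (for example ffmpeg's weave filter) before display or deinterlacing.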


-
Array Getting Lost While Passing From C++ to C Using Callback [duplicate]
23 December 2020, by Abhishek Sharma
I am trying to write video using FFmpeg by generating frames at run time with Direct3D. The frames are generated with SharpDX in C#, and I use the Windows Runtime to call back into C# to generate a frame and return a Platform::Array of bytes.


So, to write the video with FFmpeg I used C code, and to request frames I implemented a callback; all of this lives in a static lib:


uint8_t*(*genrate_frame_callback)(int) = NULL; 



Now, in the C file, I call fill_image to get the frame and write it to the video:


static void fill_image(int frame_index, int width, int height)
{
 int x, y, i;

 i = frame_index;
 auto result = genrate_frame_callback(frame_index); // after this point the debugger shows a single element, not an array
 .
 .
 .
 code to write video
}



Now, before calling the video-writing code, I pass this function to the callback; it is defined in a C++ file in a Windows Runtime component that references the static lib:


uint8_t* genrate_frame(int args)
{
 auto frame = FireGenrateFramet(args); // returns Platform::Array<byte>
 std::vector<uint8_t> v(frame->begin(), frame->end());
 return v.data(); // data is available up to this point
}


Now the result variable contains a single element. I am new to C++ and C, and I am unable to understand why the data is not passed to the function through the callback.


Edit:


Then can you help me with how to pass the data? I tried using a global-scope variable in the C++ file too, but it still gets lost; however, after introducing another callback to read the data stored in the global variable, it reads the whole data correctly:


std::vector<uint8_t> frame_v;

uint8_t* genrate_frame(int args)
{
 auto frame = FireGenrateFrame(args);
 std::vector<uint8_t> v(frame->begin(), frame->end());
 frame_v = v;
 return v.data(); // this loses the data the same way
}

uint8_t read_pixal(int args)
{
 return frame_v[args]; // whereas this reads it correctly
}



But I don't want to store the data and add a new callback to read from it; I just want to pass the array.
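
For what it's worth, the behaviour described is consistent with a lifetime bug rather than a callback problem: v is a local vector, so the pointer returned by v.data() dangles the moment genrate_frame returns, while frame_v keeps a copy alive, which is why the second callback reads it correctly. Below is a minimal sketch of one way to pass the array without a global, assuming the callback signature can be changed so that the caller owns the buffer; the names here are illustrative, not the original project's API:

// Sketch: the caller allocates the buffer and the callback fills it,
// so no pointer ever outlives the data it points to.
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <vector>

// Callback fills dst and returns the number of bytes written.
typedef int (*generate_frame_callback_t)(int frame_index,
                                         uint8_t* dst, size_t dst_size);

// Hypothetical stand-in for FireGenrateFrame(): produces a frame's bytes.
static std::vector<uint8_t> make_frame(int frame_index)
{
    return std::vector<uint8_t>(16, static_cast<uint8_t>(frame_index));
}

// C++ side: copy into dst while the local vector is still alive.
static int generate_frame(int frame_index, uint8_t* dst, size_t dst_size)
{
    std::vector<uint8_t> frame = make_frame(frame_index);
    const size_t n = std::min(frame.size(), dst_size);
    std::memcpy(dst, frame.data(), n); // dst belongs to the caller
    return static_cast<int>(n);
}

int main()
{
    generate_frame_callback_t cb = &generate_frame;
    uint8_t buf[16];                // caller-owned, valid after cb returns
    const int n = cb(7, buf, sizeof buf);
    std::printf("got %d bytes, first byte = %d\n", n, buf[0]);
    return 0;
}

In the real code, the C side would allocate dst once per frame (width * height * 4 for BGRA, say) and hand it to the callback, instead of taking ownership of a returned pointer.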


-
Piping FFmpeg output to Unity texture
1 February 2021, by Sincress
I'm working on a networking component where the server provides a texture, sends it to FFmpeg to be encoded (h264_qsv), and streams it over the network. The client receives the stream (presumably MP4), decodes it using FFmpeg again, and displays it on a texture.



Currently this works very slowly, since I save the texture to disk before encoding it to an MP4 file (also saved to disk), and on the client side I save the decoded .png texture to disk so that I can use it in Unity.



The server-side FFmpeg process is currently started with:

process.StartInfo.Arguments = @" -y -i testimg.png -c:v h264_qsv -q 5 -look_ahead 0 -preset:v faster -crf 0 test.qsv.mp4";

and the client-side one with:

process.StartInfo.Arguments = @" -y -i test.qsv.mp4 output.png";



Since this needs to be very fast (at least 30 fps) and real time, I need to pipe the texture directly to the FFmpeg process. On the client side, I likewise need to pipe the decoded data directly to the displayed texture, as opposed to saving it and then reading it from disk.



A few days of research showed me that FFmpeg supports various piping options, including data formats such as bmp_pipe (piped bmp sequence), bin (binary text), data (raw data), and image2pipe (piped image2 sequence); however, documentation and examples on how to use these options are very scarce.



Please help me: which format should I use, and how should it be used?
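
One sketch of the usual approach, not a definitive answer: since the texture is already raw pixels in memory, the rawvideo demuxer over stdin skips image files entirely. The 1280x720 BGRA geometry, frame rate, frame count, and output name below are assumptions for illustration, and this uses POSIX popen (on Windows, _popen/_pclose):

// Sketch: feed raw texture bytes to ffmpeg's stdin with -f rawvideo,
// so no PNG is ever written to disk. Geometry and names are assumed.
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

int main()
{
    const int w = 1280, h = 720, frames = 300; // assumed stream geometry
    // -f rawvideo plus -pix_fmt/-s/-r tell ffmpeg how to slice the byte
    // stream arriving on stdin ("-i -") into frames.
    FILE* ff = popen(
        "ffmpeg -y -f rawvideo -pix_fmt bgra -s 1280x720 -r 30 -i - "
        "-c:v h264_qsv -preset:v faster out.mp4",
        "w");
    if (!ff)
        return 1;

    std::vector<uint8_t> frame(static_cast<size_t>(w) * h * 4); // one BGRA frame
    for (int i = 0; i < frames; ++i) {
        // ... copy the rendered texture's pixels into `frame` here ...
        std::fwrite(frame.data(), 1, frame.size(), ff); // pipe it to ffmpeg
    }
    pclose(ff); // closing stdin lets ffmpeg flush and finalize the file
    return 0;
}

The client can run the reverse pipeline (decode with -f rawvideo -pix_fmt bgra - as the output) and read frame-sized chunks from ffmpeg's stdout straight into the texture; image2pipe/bmp_pipe are only needed if the pipe must carry image-encoded frames.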