
Media (1)
-
The Great Big Beautiful Tomorrow
28 October 2011, by
Updated: October 2011
Language: English
Type: Text
Other articles (76)
-
Websites made with MediaSPIP
2 May 2011, by
This page lists some websites based on MediaSPIP.
-
Improving the base version
13 September 2013
Nicer multiple selection
The Chosen plugin improves the usability of multiple-select form fields. See the two comparison images in the original article.
To benefit from it, simply enable the Chosen plugin (Site configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, e.g. select[multiple] for multiple-select lists (...) -
Creating farms of unique websites
13 April 2011, by
MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things):
implementation costs to be shared between several different projects / individuals
rapid deployment of multiple unique sites
creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
On other sites (14865)
-
riscv: Tweak names of cpu flags, print flags in libavutil/tests/cpu
14 December 2023, by Martin Storsjö

The names of the cpu flags, when parsed from a string with
av_parse_cpu_caps, are parsed by the libavutil eval functions. These
interpret dashes as subtractions. Therefore, these previous cpu flag
names haven't been possible to set.

Use the official names for these extensions, as the previous ad-hoc
names weren't parseable.

libavutil/tests/cpu tests that the cpu flags can be set, and prints
the detected flags.

Acked-by: Rémi Denis-Courmont <remi@remlab.net>
Signed-off-by: Martin Storsjö <martin@martin.st>
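As an aside, a minimal sketch (not part of the commit) of the behaviour it describes: av_parse_cpu_caps() feeds the string through libavutil's eval-based flags parser, where a dash reads as subtraction, so a dashed name can never match a single flag token. The dashed name below is made up for illustration, and "sse2" stands in for any dash-free flag name known to the parser.

/* Sketch: why dashed cpu-flag names could not be set.
   "some-ext" is hypothetical; "sse2" is just a known dash-free name. */
#include <stdio.h>
#include <libavutil/cpu.h>

int main(void)
{
    unsigned flags = 0;

    /* Read by the eval machinery as "<token> minus <token>" -> error. */
    if (av_parse_cpu_caps(&flags, "some-ext") < 0)
        printf("dashed name rejected by the parser\n");

    /* A dash-free name parses as a single flag token. */
    if (av_parse_cpu_caps(&flags, "sse2") == 0)
        printf("parsed cpu flags: 0x%x\n", flags);

    /* What libavutil/tests/cpu now also prints: the detected flags. */
    printf("detected cpu flags: 0x%x\n", av_get_cpu_flags());
    return 0;
}
-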
Custom FFmpeg I/O API: incorrect data from the write_packet callback of avio_alloc_context()
13 April 2021, by oaho

I would like to ask a question about the FFmpeg custom I/O API.


Description of the problem: I used the official FFmpeg example remuxing.c to test a simple remux (test1.ts -> test1.mp4). That works correctly.
But when I use a custom I/O context created with




avio_alloc_context(buf, 65535, 1, nullptr, nullptr, write_cb, seek);




the muxer writes into the AVIO memory buffer and my write callback then writes the mp4 file from that buffer. The resulting file differs from the one produced with the internal file protocol: VLC and MediaInfo can't probe it.


I used Beyond Compare 4 to compare the file data:


[screenshot: Beyond Compare 4 byte-level comparison of the two output files]


In this picture, the left side is my custom I/O output, and the right side is the official example's output (written through the file URLProtocol).


I tested it many times; each time the differences have the same size, location, and content. When I patch the few differing bytes on the left to match the data on the right, VLC plays the file normally.


Am I doing something wrong, or is the problem elsewhere?


Source code:


extern "C"{
#include <libavformat></libavformat>avformat.h>
#include <libavcodec></libavcodec>avcodec.h>
}
#include 
#include <cstdio>
#include 
void process_error(int ret, const char* info)
{
 if( ret < 0)
 {
 fprintf(stderr, info);
 std::exit(-1);
 }
}
int fd;
int write_packet(void *opaque, uint8_t *buf, int buf_size)
 {
 int ret;
 ret = write(fd, buf, buf_size);
 printf("write bytes %d\n", ret);
 return (ret == -1) ? AVERROR(errno) : ret;
 }
 int64_t seek(void *opaque, int64_t offset, int whence)
 {
 return offset;
 }

 int main()
 {

 fd = open("/home/oaho/Desktop/22.mp4", O_CREAT | O_WRONLY, 0777);
 if ( fd < 0)
 {
 return -1;
 }


 AVFormatContext *inputContext = nullptr;
 AVFormatContext *ouputContext = nullptr;

 int ret = avformat_open_input(&inputContext, "/home/oaho/Desktop/test1.ts", nullptr, nullptr);
 process_error(ret, "could'not open input\n");
 ret = avformat_find_stream_info(inputContext, nullptr);
 process_error(ret, "could'not find stream information\n");

 avformat_alloc_output_context2(&ouputContext, nullptr, "mp4", nullptr);
 if( ouputContext == nullptr)
 process_error(-1, "could'not alloc outputContext\n");


 if( ouputContext->oformat==nullptr)
 {
 ouputContext->oformat = av_guess_format("mp4", nullptr, nullptr);
 }

 uint8_t* buf = nullptr;
 buf = (uint8_t*)av_malloc(200 * 1024);
 if( buf == nullptr)
 {
 return -1;
 }
 ouputContext->pb = nullptr;
 ouputContext->pb = avio_alloc_context(buf, 200 * 1024, 1, nullptr, nullptr, write_packet, seek);
 if( ouputContext->pb == nullptr)
 {
 return -1;
 }
 ouputContext->flags = AVFMT_FLAG_CUSTOM_IO;
 //pre the stream avalible
 int *arr = new int[inputContext->nb_streams];
 if( arr == nullptr )
 process_error(-1, "can't alloc array\n");
 int stream_index = 0;
 //get stream : video stream , audio stream , subtitle stream
 for(int i = 0;i < inputContext->nb_streams;i++)
 {
 //get the single stream
 AVStream *stream = inputContext->streams[i];
 AVStream *outStream = nullptr;
 AVCodecParameters *codec = stream->codecpar;
 if( codec -> codec_type != AVMediaType::AVMEDIA_TYPE_VIDEO
 && codec -> codec_type != AVMediaType::AVMEDIA_TYPE_AUDIO
 && codec -> codec_type != AVMediaType::AVMEDIA_TYPE_SUBTITLE)
 {
 arr[i] = -1;
 continue;
 }
 arr[i] = stream_index++;
 outStream = avformat_new_stream(ouputContext, nullptr);
 if(outStream == nullptr)
 goto end;
 int ret = avcodec_parameters_copy(outStream->codecpar, stream->codecpar);
 if( ret < 0)
 goto end;
 //not include additional information
 outStream->codecpar->codec_tag = 0;
 }
 ret = avformat_write_header(ouputContext, nullptr);
 process_error(ret, "can't write header\n");

 while(1)
 {
 AVPacket pkt;
 av_init_packet(&pkt);
 AVStream *in_stream, *out_stream;
 ret = av_read_frame(inputContext, &pkt);
 if( ret < 0)
 break; 
 in_stream = inputContext->streams[pkt.stream_index];
 if (arr[pkt.stream_index] < 0) {
 av_packet_unref(&pkt);
 continue;
 }
 pkt.stream_index = arr[pkt.stream_index];
 out_stream = ouputContext->streams[pkt.stream_index];
 /* copy packet */
 pkt.pts = av_rescale_q(pkt.pts, in_stream->time_base, out_stream->time_base);
 pkt.dts = av_rescale_q(pkt.dts, in_stream->time_base, out_stream->time_base);
 pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);
 pkt.pos = -1;
 //log_packet(ofmt_ctx, &pkt, "out");
 ret = av_interleaved_write_frame(ouputContext, &pkt);
 if (ret < 0) {
 fprintf(stderr, "Error muxing packet\n");
 break;
 }
 av_packet_unref(&pkt);
 }
 av_write_trailer(ouputContext);
 end:
 close(fd);
 delete [] arr;
 avformat_free_context(inputContext);
 avformat_free_context(ouputContext);
 return 0;
 }
</cstdio>
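One detail worth flagging in the code above: the seek callback reports success without ever moving the file descriptor. The mp4 muxer seeks back while writing the trailer to patch size fields near the start of the file, so with a no-op seek those patch bytes end up appended at the current end of file instead, which would match the handful of differing bytes seen in the comparison. Below is a sketch (untested, assuming the global fd from the program above and a seekable plain file) of a seek callback that actually repositions the descriptor:

// Untested sketch: forwards seeks to the underlying descriptor so the
// muxer's end-of-muxing header patch lands at the right offset.
#include <unistd.h>   // lseek
#include <cerrno>
extern "C" {
#include <libavformat/avio.h>   // AVSEEK_SIZE, AVSEEK_FORCE, AVERROR
}

extern int fd;   // the global defined in the program above

int64_t seek_cb(void *opaque, int64_t offset, int whence)
{
    if (whence & AVSEEK_SIZE)        // FFmpeg may ask for the total stream size
        return -1;                   // unknown here; callers handle that case
    int64_t pos = lseek(fd, offset, whence & ~AVSEEK_FORCE);
    return (pos == -1) ? AVERROR(errno) : pos;
}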


-
Exactly what parameters are needed by libva/VAAPI to decode an H.264 video frame?
12 July 2022, by Synthetix

I've got a basic Linux app running on supported Intel hardware that uses Intel's libva (VAAPI) to decode H.264 frames from an MP4 file. I have the entire thing working except the part where the frame gets submitted to the GPU/decoder. What's unclear is exactly what information to submit, when, and in what order. I don't see any official documentation on this, either. Here's the point in the code I'm referring to:


vaBeginPicture(...)
vaRenderPicture(...)
vaEndPicture(...)



The functions vaBeginPicture and vaEndPicture are self-explanatory, but my issue is with vaRenderPicture. I would expect to need to send the SPS and PPS (out of the avcC atom in the MP4 file), then each frame or slice, to the decoder via vaRenderPicture(). But this isn't mentioned anywhere other than in code examples I've found online. From some of these examples, I've surmised the following:


vaRenderPicture() // call 1/4: VAPictureParameterBufferH264: Send picture params? e.g. frame size and SPS/PPS?
vaRenderPicture() // call 2/4: VAIQMatrixBufferH264: Send the inverse quantization matrix? Where do I get this?
vaRenderPicture() // call 3/4: VASliceParameterBufferH264: Parameters of the next slice of H.264 picture data?
vaRenderPicture() // call 4/4: Slice Data? The actual compressed H.264 data from the file?



I have a very rudimentary understanding of how H.264 data is arranged in an MP4. But the libva documentation, as far as I can tell, does not explain exactly what is needed, or in what order, to successfully decode a frame. Furthermore, the buffer structures submitted to the decoder have an extensive number of fields, which implies I need to know a great deal about the frames before I submit them. In other video APIs I've used, none of this is needed. Why so complex?
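For what it's worth, expressing that surmised sequence with the actual libva entry points gives something like the sketch below. It is only a sketch: the parameter structures are assumed to already be filled in from the parsed SPS/PPS and slice header, which is precisely the part I'm unsure about, and error handling is omitted.

/* Rough sketch of the surmised per-slice submission using real libva calls.
   pic/iq/slice are assumed to be filled in from the parsed bitstream;
   slice_data points at the compressed slice payload. */
#include <va/va.h>

static VAStatus decode_one_slice(VADisplay dpy, VAContextID ctx,
                                 VASurfaceID target,
                                 VAPictureParameterBufferH264 *pic,
                                 VAIQMatrixBufferH264 *iq,
                                 VASliceParameterBufferH264 *slice,
                                 void *slice_data, unsigned slice_size)
{
    VABufferID bufs[4];

    vaCreateBuffer(dpy, ctx, VAPictureParameterBufferType, sizeof(*pic), 1, pic, &bufs[0]);
    vaCreateBuffer(dpy, ctx, VAIQMatrixBufferType, sizeof(*iq), 1, iq, &bufs[1]);
    vaCreateBuffer(dpy, ctx, VASliceParameterBufferType, sizeof(*slice), 1, slice, &bufs[2]);
    vaCreateBuffer(dpy, ctx, VASliceDataBufferType, slice_size, 1, slice_data, &bufs[3]);

    vaBeginPicture(dpy, ctx, target);     /* bind the surface the frame decodes into */
    vaRenderPicture(dpy, ctx, bufs, 4);   /* buffers may also be sent in several calls */
    return vaEndPicture(dpy, ctx);        /* submission point; decoding starts here */
}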


Any pointers to documentation on exactly what parameters and data are needed, and how to arrange it all before submitting to the VAAPI decoder, would be much appreciated.