
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011
Updated: July 2013
Language: French
Type: Text
Other articles (70)
-
Websites made with MediaSPIP
2 May 2011. This page lists some websites based on MediaSPIP.
-
MediaSPIP v0.2
21 June 2013. MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, as announced here.
The zip file provided here contains only the MediaSPIP sources, in the standalone version.
As with the previous version, all software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make other modifications (...) -
Creating farms of unique websites
13 April 2011. MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
On other sites (13016)
-
How to pass BytesIO image objects to ffmpeg?
13 April 2023, by Mr.Slow. I have a (nested) list of BytesIO objects (images) that I would like to pass to ffmpeg to make a video. I do know that ffmpeg cannot take them directly. What should I convert them to first? There might be a better way using 'pipe:', which I have not managed to implement yet.
(In this example code I ignore image duration and audio, too.)


import io
import logging
import subprocess
from typing import BinaryIO, List

import ffmpeg  # the ffmpeg-python package

log = logging.getLogger(__name__)
# PROJECT_PATH is defined elsewhere in the project

def merge_videos(file_id: float, audio_list: List[BinaryIO], duration_list: List[float], images_nested_list):
    # flatten the nested list of images
    images_list = [image for images_sublist in images_nested_list for image in images_sublist]

    additional_parameters = {'c:a': 'aac', 'c:v': 'libx264'}

    # Create a BytesIO object to hold the output video data
    output_data = io.BytesIO()

    # create the FFmpeg command with the specified parameters and pipe the output to the BytesIO object
    command = ffmpeg.output(*images_list, '-', vf='fps=10,format=yuv420p', preset='veryfast', shortest=None, r=10, max_muxing_queue_size=4000, **additional_parameters).pipe(output_data)

    try:
        # run the FFmpeg command with error and output capture
        subprocess.check_output(['ffmpeg', '-y', '-f', 'concat', '-safe', '0', '-i', 'audio.txt', '-i', '-', '-c:v', 'copy', '-c:a', 'aac', f"{PROJECT_PATH}/data/final-{file_id}.mp4"], input=output_data.getvalue())
        log.info("Final video with file_id %s has been converted successfully", file_id)
    except subprocess.CalledProcessError as error:
        log.error("FFmpeg failed for file_id %s: %s", file_id, error)

...this code returns:


TypeError: Expected incoming stream(s) to be of one of the following types: ffmpeg.nodes.FilterableStream; got <class>


How can I handle this? Thanks for any help.
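One common workaround (a sketch, not from the original thread; it assumes the BytesIO objects hold complete PNG-encoded images) is to bypass ffmpeg-python's stream graph entirely: concatenate the encoded image bytes and feed them to ffmpeg's stdin through the image2pipe demuxer.

```python
import io
import subprocess
from typing import List

def build_image2pipe_cmd(fps: int, output_path: str) -> List[str]:
    # -f image2pipe makes ffmpeg demux a stream of concatenated encoded
    # images arriving on stdin ('-i -'); '-c:v png' names their codec.
    return [
        "ffmpeg", "-y",
        "-f", "image2pipe",
        "-framerate", str(fps),
        "-c:v", "png",
        "-i", "-",
        "-vf", "format=yuv420p",
        "-c:v", "libx264",
        output_path,
    ]

def images_to_video(images: List[io.BytesIO], fps: int, output_path: str) -> None:
    # image2pipe splits the concatenated bytes back into individual frames
    payload = b"".join(img.getvalue() for img in images)
    subprocess.run(build_image2pipe_cmd(fps, output_path), input=payload, check=True)
```

Audio could then be muxed in a second pass, much like the concat step already shown in the question.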


-
avcodec/dpx: fix check of minimal data size for unpadded content
19 October 2022, by Jerome Martinez

The stride value is not relevant for unpadded content; the total pixel count (width x height) must be used instead of rounding based on the width only and then multiplying by the height.

The computation of the unpadded_10bit value is moved earlier in the code so that it can be used when computing the minimal content size. It is also now only set for 10-bit content.

This fixes an 'Overread buffer' error when the content does not happen to have (enough) padding bytes at the end to avoid being rejected by the stride-based formula.

Fixes ticket #10259.
Signed-off-by: Jerome Martinez <jerome@mediaarea.net>
Signed-off-by: Marton Balint <cus@passwd.hu> -
How AVCodecContext bitrate, framerate and timebase are used when encoding a single frame
28 March 2023, by Cyrus. I am trying to learn FFmpeg from examples, as I am on a tight schedule. The task is to encode a raw YUV image into JPEG format at a given width and height. I found examples on the official FFmpeg website, which turned out to be quite straightforward. However, there are some fields in AVCodecContext that I thought only make sense when encoding video (e.g. bitrate, framerate, timebase, gop_size, max_b_frames, etc.).


I understand at a high level what those values mean for video, but do I need to care about them when I just want a single image? Currently, for testing, I am setting them to dummy values and it seems to work. But I want to make sure that I am not making terrible assumptions that will break in the long run.


EDIT:

Here is the code I have. Most of it is copied from the examples, with some changes to replace old APIs with newer ones.


#include "thumbnail.h"
#include "libavcodec/avcodec.h"
#include "libavutil/imgutils.h"
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

void print_averror(int error_code) {
    char err_msg[100] = {0};
    av_strerror(error_code, err_msg, 100);
    printf("Reason: %s\n", err_msg);
}

ffmpeg_status_t save_yuv_as_jpeg(uint8_t* source_buffer, char* output_thumbnail_filename, int thumbnail_width, int thumbnail_height) {
    const AVCodec* mjpeg_codec = avcodec_find_encoder(AV_CODEC_ID_MJPEG);
    if (!mjpeg_codec) {
        printf("Codec for mjpeg cannot be found.\n");
        return FFMPEG_THUMBNAIL_CODEC_NOT_FOUND;
    }

    AVCodecContext* codec_ctx = avcodec_alloc_context3(mjpeg_codec);
    if (!codec_ctx) {
        printf("Codec context cannot be allocated for the given mjpeg codec.\n");
        return FFMPEG_THUMBNAIL_ALLOC_CONTEXT_FAILED;
    }

    AVPacket* pkt = av_packet_alloc();
    if (!pkt) {
        printf("Thumbnail packet cannot be allocated.\n");
        return FFMPEG_THUMBNAIL_ALLOC_PACKET_FAILED;
    }

    AVFrame* frame = av_frame_alloc();
    if (!frame) {
        printf("Thumbnail frame cannot be allocated.\n");
        return FFMPEG_THUMBNAIL_ALLOC_FRAME_FAILED;
    }

    // The part that I don't understand
    codec_ctx->bit_rate = 400000;
    codec_ctx->width = thumbnail_width;
    codec_ctx->height = thumbnail_height;
    codec_ctx->time_base = (AVRational){1, 25};
    codec_ctx->framerate = (AVRational){1, 25};

    codec_ctx->gop_size = 10;
    codec_ctx->max_b_frames = 1;
    codec_ctx->pix_fmt = AV_PIX_FMT_YUV420P;

    // the encoder must be opened before frames can be sent to it
    int ret = avcodec_open2(codec_ctx, mjpeg_codec, NULL);
    if (ret < 0) {
        print_averror(ret);
        printf("Failed to open mjpeg encoder.\n");
        return FFMPEG_THUMBNAIL_ALLOC_CONTEXT_FAILED; // no dedicated status code
    }

    // the frame must describe its own geometry and format
    frame->format = AV_PIX_FMT_YUV420P;
    frame->width = thumbnail_width;
    frame->height = thumbnail_height;

    ret = av_image_fill_arrays(frame->data, frame->linesize, source_buffer, AV_PIX_FMT_YUV420P, thumbnail_width, thumbnail_height, 32);
    if (ret < 0) {
        print_averror(ret);
        printf("Pixel format: yuv420p, width: %d, height: %d\n", thumbnail_width, thumbnail_height);
        return FFMPEG_THUMBNAIL_FILL_FRAME_DATA_FAILED;
    }

    ret = avcodec_send_frame(codec_ctx, frame);
    if (ret < 0) {
        print_averror(ret);
        printf("Failed to send frame to encoder.\n");
        return FFMPEG_THUMBNAIL_FILL_SEND_FRAME_FAILED;
    }

    ret = avcodec_receive_packet(codec_ctx, pkt);
    if (ret < 0) {
        print_averror(ret);
        printf("Failed to receive packet from encoder.\n");
        return FFMPEG_THUMBNAIL_FILL_SEND_FRAME_FAILED;
    }

    // store the thumbnail in output (O_CREAT requires an explicit mode)
    int fd = open(output_thumbnail_filename, O_CREAT | O_RDWR, 0644);
    write(fd, pkt->data, pkt->size);
    close(fd);

    // freeing allocated structs
    avcodec_free_context(&codec_ctx);
    av_frame_free(&frame);
    av_packet_free(&pkt);
    return FFMPEG_SUCCESS;
}
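For what it's worth, the video-oriented fields in the snippet above (bit_rate, framerate, gop_size, max_b_frames) are rate-control and inter-frame settings; MJPEG is intra-only, so for a single frame the encoder mainly needs width, height, pix_fmt and a valid time_base. The same one-frame encode can be done with the ffmpeg CLI without touching any of those options; a minimal sketch (file names and geometry are illustrative, not from the question):

```python
import subprocess
from typing import List

def yuv_to_jpeg_cmd(width: int, height: int, raw_path: str, jpeg_path: str) -> List[str]:
    # Only the raw input's geometry and pixel format are declared; no
    # bitrate, GOP or B-frame options are needed to produce one JPEG.
    return [
        "ffmpeg", "-y",
        "-f", "rawvideo",                    # headerless YUV must be described
        "-pix_fmt", "yuv420p",
        "-video_size", f"{width}x{height}",
        "-i", raw_path,
        "-frames:v", "1",                    # encode exactly one frame
        jpeg_path,
    ]

def yuv_to_jpeg(width: int, height: int, raw_path: str, jpeg_path: str) -> None:
    subprocess.run(yuv_to_jpeg_cmd(width, height, raw_path, jpeg_path), check=True)
```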