
Other articles (37)
-
Enabling visitor registration
12 April 2011 — It is also possible to enable visitor registration, which lets anyone open an account themselves on the channel in question, for example in the context of open projects.
To do so, simply go to the site configuration area and choose the "Gestion des utilisateurs" (user management) submenu. The first form shown corresponds to this feature.
By default, when it was initialised, MediaSPIP created a menu item in the top menu of the page leading (...)
-
Publishing on MediaSPIP
13 June 2013 — Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.
-
Emballe médias: what is it for?
4 February 2011 — This plugin is designed to manage sites for publishing documents of all kinds.
It creates "médias", namely: a "média" is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only one document can be linked to a so-called "média" article;
On other sites (9281)
-
Android: BitmapFactory.decodeStream() returns null after first success
27 April 2014, by giraffe — My code is intended to update an ImageView with an image from a server when a UI button is pressed.
The client side code shown below is a Runnable that runs when the button is pressed. The server is a desktop Java application with ffmpeg running in the background, continuously updating image.png with an image from the webcam. When the button is pressed on the Android app, the Android app attempts to receive image.png from the server, and because ffmpeg is constantly updating this image, it should be the most recent image taken with the server’s webcam.
My problem is that the first button press shows the correct image, but every subsequent button press will just clear out my ImageView. BitmapFactory.decodeStream() is returning null every time I call it after the first time.
Client side (runs when the button is pressed):

InputStream inputStream = s.getInputStream();
img = BitmapFactory.decodeStream(inputStream);
jpgView.setImageBitmap(img);
jpgView.invalidate();

Server side:
ServerSocket sock = new ServerSocket(PORT_NUMBER);
Socket clientSocket = sock.accept();
for (;;) {
    File f = new File("C:/folder/image.png");
    FileInputStream fileInput = new FileInputStream(f);
    BufferedInputStream bufferedInput = new BufferedInputStream(fileInput);
    OutputStream outStream = clientSocket.getOutputStream();
    try {
        byte[] outBuffer = new byte[fSize];
        int bRead = bufferedInput.read(outBuffer, 0, outBuffer.length);
        outStream.write(outBuffer, 0, bRead);
        outStream.flush();
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        bufferedInput.close();
    }
}
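
// Note (not part of the original question code): BitmapFactory.decodeStream()
// likely reads ahead past the end of the first PNG on the shared socket stream,
// so later calls no longer start at an image header and return null. A common
// remedy is to frame each image: send its byte length first (for example with
// DataOutputStream.writeInt()), have the client read exactly that many bytes
// into a byte[], and decode with BitmapFactory.decodeByteArray().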
-
FFmpeg - avcodec_receive_frame returns 0 but frames are invalid
9 July 2019, by Jinx — I've been trying to extract images from videos, but it fails for those using the PNG codec; my code works fine with JPEG. avcodec_receive_frame succeeded, but the frame data looked like garbage. Do I have to do something special during demuxing when dealing with PNG?

An exception was thrown at 0x00007FFF7DF34B9A (msvcrt.dll) in Program.exe: 0xC0000005: Access violation reading location 0x00000000000003F0 when calling avcodec_send_frame in my saveImage function, which I guess means I was accessing invalid or unallocated memory. How did this happen? Just assume all the function calls returned 0 until the exception was thrown.
Decoding:

bool ImageExtractor::decode() {
    // some other code here
    ret = avcodec_send_packet(codec_ctx, packet);   // returned 0
    ret = avcodec_receive_frame(codec_ctx, frame);  // returned 0
    if (ret == 0) {
        if (count >= target_frame) {
            snprintf(buf, sizeof(buf), "%s/%d.png", destination.toLocal8Bit().data(), count);
            saveImage(frame, buf); // a function that writes images to disk
        }
        // some other code here
    }
}
bool ImageExtractor::saveImage(AVFrame *frame, char *destination) {
    AVFormatContext *imgFmtCtx = avformat_alloc_context();
    imgFmtCtx->oformat = av_guess_format("mjpeg", NULL, NULL);
    // some other code here
    if (!frame)
        std::cout << "invalid frame \n";
    if (!imgCodecCtx) // AV_CODEC_ID_MJPEG(7)
        std::cout << "invalid codec ctx \n";
    ret = avcodec_send_frame(imgCodecCtx, frame); // everything stopped here
}

Demuxing:
avformat_open_input(&format_ctx, source.toLocal8Bit(), nullptr, nullptr);
vsi = av_find_best_stream(format_ctx, AVMEDIA_TYPE_VIDEO, -1, -1, nullptr, 0);
codec_par = avcodec_parameters_alloc();
avcodec_parameters_copy(codec_par, format_ctx->streams[vsi]->codecpar);
AVCodec* codec = avcodec_find_decoder(codec_par->codec_id); // AV_CODEC_ID_PNG(61)
codec_ctx = avcodec_alloc_context3(codec);
avcodec_parameters_to_context(codec_ctx, codec_par);
avcodec_parameters_free(&codec_par);
avcodec_open2(codec_ctx, codec, 0);
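
A possible explanation, offered as a hedged sketch rather than a confirmed diagnosis: a PNG-coded stream typically decodes to an RGB-family pixel format, while an MJPEG encoder only accepts YUVJ pixel formats, and imgCodecCtx needs its width, height, pix_fmt and time_base set before avcodec_open2(). One way to rule out a pixel-format mismatch is to convert the decoded frame with libswscale before calling avcodec_send_frame(); the helper name convert_to_yuvj420p() below is illustrative and not part of the original code.

#include <stdint.h>
#include <libavutil/frame.h>
#include <libavutil/pixfmt.h>
#include <libswscale/swscale.h>

/* Illustrative helper: convert a decoded frame (e.g. RGB24/RGBA from the PNG
 * decoder) to AV_PIX_FMT_YUVJ420P so an MJPEG encoder can accept it. */
static AVFrame *convert_to_yuvj420p(const AVFrame *src)
{
    AVFrame *dst = av_frame_alloc();
    if (!dst)
        return NULL;

    dst->format = AV_PIX_FMT_YUVJ420P;
    dst->width  = src->width;
    dst->height = src->height;
    if (av_frame_get_buffer(dst, 0) < 0) {
        av_frame_free(&dst);
        return NULL;
    }

    struct SwsContext *sws = sws_getContext(src->width, src->height,
                                            (enum AVPixelFormat)src->format,
                                            dst->width, dst->height,
                                            (enum AVPixelFormat)dst->format,
                                            SWS_BILINEAR, NULL, NULL, NULL);
    if (!sws) {
        av_frame_free(&dst);
        return NULL;
    }
    sws_scale(sws, (const uint8_t * const *)src->data, src->linesize,
              0, src->height, dst->data, dst->linesize);
    sws_freeContext(sws);
    return dst;
}

In saveImage(), the converted frame (rather than the raw decoded one) would then be passed to avcodec_send_frame(imgCodecCtx, ...) and released with av_frame_free() afterwards.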
-
ffmpeg: libavfilter API av_buffersink_get_frame always returns EAGAIN
29 June 2024, by aculnaig — I want to resize an image with the libavfilter C API through the zscale filter and the libplacebo filter, but no matter how I call av_buffersink_get_frame, it always returns EAGAIN and no data is filled into the filtered_frame.

#include <stdio.h>
#include <stdlib.h>
#include <libgen.h>

#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavfilter/avfilter.h>
#include <libavfilter/buffersrc.h>
#include <libavfilter/buffersink.h>
#include <libavutil/log.h>

int main(int argc, char **argv)
{
 if (argc != 2) {
 fprintf(stderr, "usage: %s <filename>\n", argv[0]);
 exit(EXIT_FAILURE);
 }

 const char *src_name = argv[1];
 const char *dst_name = basename(src_name);
 int ret = 0;

 const enum AVPixelFormat src_format = AV_PIX_FMT_YUV422P;

 av_log_set_level(AV_LOG_TRACE);

 AVFormatContext *fmt_ctx = avformat_alloc_context();
 if (fmt_ctx == NULL) {
 av_log(NULL, AV_LOG_TRACE, "avformat_alloc_context(): failed.\n");
 exit(EXIT_SUCCESS);
 }
 if ((ret = avformat_open_input(&fmt_ctx, src_name, NULL, NULL)) != 0) {
 av_log(NULL, AV_LOG_TRACE, "avformat_open_input(): %s.\n", av_err2str(ret));
 exit(EXIT_SUCCESS);
 }

 AVCodecContext *dec_ctx;
 AVCodec *dec;

 if ((ret = av_find_best_stream(fmt_ctx, AVMEDIA_TYPE_VIDEO, -1, -1, &dec, 0)) < 0) {
 av_log(NULL, AV_LOG_TRACE, "av_find_best_stream(): %s.\n", av_err2str(ret));
 exit(EXIT_SUCCESS);
 }
 dec_ctx = avcodec_alloc_context3(dec);
 if (dec_ctx == NULL) {
 av_log(NULL, AV_LOG_TRACE, "avcodec_alloc_context3(): failed.\n");
 exit(EXIT_SUCCESS);
 }
 if ((ret = avcodec_open2(dec_ctx, dec, NULL)) < 0) {
 av_log(NULL, AV_LOG_TRACE, "avcodec_open2(): %s.\n", av_err2str(ret));
 exit(EXIT_SUCCESS);
 }

 AVFrame *frame = av_frame_alloc();
 if (frame == NULL) {
 av_log(NULL, AV_LOG_TRACE, "av_frame_alloc(): failed.\n");
 exit(EXIT_SUCCESS);
 }
 AVPacket *packet = av_packet_alloc();
 if (packet == NULL) {
 av_log(NULL, AV_LOG_TRACE, "av_packet_alloc(): failed.\n");
 exit(EXIT_SUCCESS);
 }

 if ((ret = av_read_frame(fmt_ctx, packet)) < 0) {
 av_log(NULL, AV_LOG_TRACE, "av_read_frame(): %s.\n", av_err2str(ret));
 exit(EXIT_SUCCESS);
 }
 ret = avcodec_send_packet(dec_ctx, packet);
 if (ret == AVERROR(EAGAIN)) {
 av_log(NULL, AV_LOG_TRACE, "avcodec_send_packet(): %s.\n", av_err2str(ret));
 avcodec_receive_frame(dec_ctx, frame);
 avcodec_send_packet(dec_ctx, packet);
 }
 
 if ((ret = avcodec_receive_frame(dec_ctx, frame)) < 0) {
 av_log(NULL, AV_LOG_TRACE, "avcodec_receive_frame(): %s.\n", av_err2str(ret));
 exit(EXIT_SUCCESS);
 }
 av_packet_unref(packet);

 av_log(NULL, AV_LOG_TRACE, "filename: %s, w: %d, h: %d, fmt: %d\n", src_name, frame->width, frame->height, frame->format);

 AVFilterGraph *filter_graph = avfilter_graph_alloc();

 char buffersrc_args[512];
 snprintf(buffersrc_args, sizeof(buffersrc_args), "video_size=%dx%d:pix_fmt=%d:time_base=1/25", frame->width, frame->height, frame->format); 

 AVFilterContext *buffersrc_ctx;
 if ((ret = avfilter_graph_create_filter(&buffersrc_ctx, avfilter_get_by_name("buffer"), NULL, buffersrc_args, NULL, filter_graph)) < 0) {
 av_log(NULL, AV_LOG_TRACE, "avfilter_graph_create_filter(): %s.\n", av_err2str(ret));
 exit(EXIT_SUCCESS);
 }

 char libplacebo_args[512];
 snprintf(libplacebo_args, sizeof(libplacebo_args), "format=yuv420p:colorspace=bt470bg:color_primaries=bt709:color_trc=iec61966-2-1:range=pc:w=(iw/2):h=(ih/2):downscaler=none:dithering=none");
 AVFilterContext *libplacebo_ctx;
 if ((ret = avfilter_graph_create_filter(&libplacebo_ctx, avfilter_get_by_name("libplacebo"), NULL, libplacebo_args, NULL, filter_graph)) < 0) {
 av_log(NULL, AV_LOG_TRACE, "avfilter_graph_create_filter(): %s.\n", av_err2str(ret));
 exit(EXIT_SUCCESS);
 }

 AVFilterContext *buffersink_ctx;
 if ((ret = avfilter_graph_create_filter(&buffersink_ctx, avfilter_get_by_name("buffersink"), NULL, NULL, NULL, filter_graph)) < 0) {
 av_log(NULL, AV_LOG_TRACE, "avfilter_graph_create_filter(): %s.\n", av_err2str(ret));
 exit(EXIT_SUCCESS);
 }

 if ((ret = avfilter_link(buffersrc_ctx, 0, libplacebo_ctx, 0)) != 0) {
 av_log(NULL, AV_LOG_TRACE, "avfilter_link(): %s.\n", av_err2str(ret));
 exit(EXIT_SUCCESS);
 }
 if ((ret = avfilter_link(libplacebo_ctx, 0, buffersink_ctx, 0)) != 0) {
 av_log(NULL, AV_LOG_TRACE, "avfilter_link(): %s.\n", av_err2str(ret));
 exit(EXIT_SUCCESS);
 }
 
 if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0) {
 av_log(NULL, AV_LOG_TRACE, "avfilter_graph_config(): %s.\n", av_err2str(ret));
 exit(EXIT_SUCCESS);
 }

 AVFrame *filtered_frame = av_frame_alloc();
 if (filtered_frame == NULL) {
 av_log(NULL, AV_LOG_TRACE, "av_frame_alloc(): failed.\n");
 exit(EXIT_SUCCESS);
 }
 if ((ret = av_buffersrc_add_frame(buffersrc_ctx, frame)) < 0) {
 av_log(NULL, AV_LOG_TRACE, "av_buffersrc_add_frame(): %s.\n", av_err2str(ret));
 exit(EXIT_SUCCESS);
 }
 
 while (1) {
 int ret = av_buffersink_get_frame(buffersink_ctx, filtered_frame);
 if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
 break;
 if (ret < 0) {
 av_log(NULL, AV_LOG_TRACE, "av_buffersink_get_frame(): %s.\n", av_err2str(ret));
 exit(EXIT_FAILURE);
 } 
 }
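 /* Note: only one frame is pushed into the graph above and end of stream is
  * never signalled on the buffer source (e.g. av_buffersrc_add_frame(buffersrc_ctx, NULL)),
  * so a filter that buffers input may legitimately keep returning EAGAIN here;
  * the loop then exits with filtered_frame still empty. */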

 av_log(NULL, AV_LOG_TRACE, "filename: %s, w: %d, h: %d, f: %d\n", dst_name, filtered_frame->width, filtered_frame->height, filtered_frame->format);

 AVCodecContext *enc_ctx;
 AVCodec *enc;
 AVPacket *enc_packet = av_packet_alloc();

 enc = avcodec_find_encoder_by_name("mjpeg");
 enc_ctx = avcodec_alloc_context3(enc);
 enc_ctx->width = filtered_frame->width;
 enc_ctx->height = filtered_frame->height;
 enc_ctx->bit_rate = dec_ctx->bit_rate * 1024;
 enc_ctx->time_base = (AVRational) {1, 25};
 enc_ctx->framerate = (AVRational) {25, 1};
 enc_ctx->pix_fmt = filtered_frame->format;
 enc_ctx->strict_std_compliance = FF_COMPLIANCE_EXPERIMENTAL;
 enc_ctx->compression_level = 0;
 enc_ctx->color_range = AVCOL_RANGE_JPEG;

 avcodec_open2(enc_ctx, enc, NULL);

 avcodec_send_frame(enc_ctx, filtered_frame);
 avcodec_receive_packet(enc_ctx, enc_packet);

 FILE *dst_file = fopen(dst_name, "wb");
 fwrite(enc_packet->data, 1, enc_packet->size, dst_file);
 fclose(dst_file);

 av_packet_unref(enc_packet);
 av_frame_free(&frame);
 av_frame_free(&filtered_frame);
 
 avfilter_graph_free(&filter_graph);

 avformat_close_input(&fmt_ctx);
 avformat_free_context(fmt_ctx);
 
 exit(EXIT_SUCCESS);
}


And here are the logs:

PS: there were so many logs that I could not paste them all on Stack Overflow.
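
For completeness, a minimal sketch of the drain pattern that usually resolves the EAGAIN, assuming a graph configured exactly as above: only one frame is ever pushed, so a filter that buffers input can keep the sink on EAGAIN until end of stream is signalled. Passing NULL to av_buffersrc_add_frame() marks EOF and flushes the graph; the helper name filter_one_frame() is illustrative and not part of the original code.

#include <libavfilter/buffersrc.h>
#include <libavfilter/buffersink.h>
#include <libavutil/frame.h>

/* Push one decoded frame, signal EOF on the buffer source, then pull the
 * filtered result from the sink. Returns a non-negative value on success,
 * a negative AVERROR otherwise. */
static int filter_one_frame(AVFilterContext *buffersrc_ctx,
                            AVFilterContext *buffersink_ctx,
                            AVFrame *in, AVFrame *out)
{
    int ret;

    /* Hand the decoded frame to the graph (the call takes the reference). */
    if ((ret = av_buffersrc_add_frame(buffersrc_ctx, in)) < 0)
        return ret;

    /* A NULL frame marks end of stream and flushes filters that buffer input. */
    if ((ret = av_buffersrc_add_frame(buffersrc_ctx, NULL)) < 0)
        return ret;

    /* With EOF signalled, the sink can deliver the frame instead of EAGAIN. */
    return av_buffersink_get_frame(buffersink_ctx, out);
}

With the frame flushed through this way, filtered_frame would be populated before the final log line and the MJPEG encode step in the listing above.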