
Other articles (99)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • Updating from version 0.1 to 0.2

    24 June 2013

    Explanation of the notable changes when moving from version 0.1 of MediaSPIP to version 0.2. What's new
    Software dependencies: the latest versions of FFmpeg (>= v1.2.1) are now used; the dependencies for Smush are installed; MediaInfo and FFprobe are installed for metadata retrieval; ffmpeg2theora is no longer used; flvtool2 is dropped in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)

  • Customising by adding your own logo, banner or background image

    5 September 2013

    Some themes support three customisation elements: the addition of a logo; the addition of a banner; the addition of a background image.

On other sites (9699)

  • use ffmpeg api to convert audio files. crash on avcodec_encode_audio2

    17 February 2014, by fabian

    From the examples I got the basic idea of this code.
    However, I am not sure what I am missing, as muxing.c, demuxing.c and
    decoding_encoding.c all use different approaches.

    The process of converting an audio file to another file should go roughly like this:
    inputfile -demux-> audiostream -read-> inPackets -decode2frames-> frames -encode2packets-> outPackets -write-> audiostream -mux-> outputfile

    However, I found the following comment in demuxing.c:
    /* Write the raw audio data samples of the first plane. This works
    * fine for packed formats (e.g. AV_SAMPLE_FMT_S16). However,
    * most audio decoders output planar audio, which uses a separate
    * plane of audio samples for each channel (e.g. AV_SAMPLE_FMT_S16P).
    * In other words, this code will write only the first audio channel
    * in these cases.
    * You should use libswresample or libavfilter to convert the frame
    * to packed data. */

    My questions about this are:

    1. Can I expect a frame retrieved by one of the decoder functions, e.g.
      avcodec_decode_audio4, to hold values suitable for feeding directly into an
      encoder, or is the resampling step mentioned in the comment mandatory?

    2. Am I taking the right approach? ffmpeg is very asymmetric, i.e. if there is a function
      open_file_for_input there might not be a function open_file_for_output. Also, there are
      different versions of many functions (avcodec_decode_audio[1-4]) and different naming
      schemes, so it's very hard to tell whether the general approach is right, or actually an
      ugly mixture of techniques that were used at different version bumps of ffmpeg.

    3. ffmpeg uses a lot of specific terms, like 'planar sampling' or 'packed format', and I am having a hard time finding definitions for these terms. Is it possible to write working code without deep knowledge of audio?

    Here is my code so far; right now it crashes at avcodec_encode_audio2,
    and I don't know why.

    int Java_com_fscz_ffmpeg_Audio_convert(JNIEnv * env, jobject this, jstring jformat, jstring jcodec, jstring jsource, jstring jdest) {
       jboolean isCopy;
       jclass configClass = (*env)->FindClass(env, "com.fscz.ffmpeg.Config");
       jfieldID fid = (*env)->GetStaticFieldID(env, configClass, "ffmpeg_logging", "I");
       logging = (*env)->GetStaticIntField(env, configClass, fid);

       /// open input
       const char* sourceFile = (*env)->GetStringUTFChars(env, jsource, &isCopy);
       AVFormatContext* pInputCtx;
       AVStream* pInputStream;
       open_input(sourceFile, &pInputCtx, &pInputStream);

       // open output
       const char* destFile = (*env)->GetStringUTFChars(env, jdest, &isCopy);
       const char* cformat = (*env)->GetStringUTFChars(env, jformat, &isCopy);
       const char* ccodec = (*env)->GetStringUTFChars(env, jcodec, &isCopy);
       AVFormatContext* pOutputCtx;
       AVOutputFormat* pOutputFmt;
       AVStream* pOutputStream;
       open_output(cformat, ccodec, destFile, &pOutputCtx, &pOutputFmt, &pOutputStream);

       /// decode/encode
       error = avformat_write_header(pOutputCtx, NULL);
       DIE_IF_LESS_ZERO(error, "error writing output stream header to file: %s, error: %s", destFile, e2s(error));

       AVFrame* frame = avcodec_alloc_frame();
       DIE_IF_UNDEFINED(frame, "Could not allocate audio frame");
       frame->pts = 0;

       LOGI("allocate packet");
       AVPacket pktIn;
       AVPacket pktOut;
       LOGI("done");
       int got_frame, got_packet, len, frame_count = 0;
       int64_t processed_time = 0, duration = pInputStream->duration;
       while (av_read_frame(pInputCtx, &pktIn) >= 0) {
           do {
               len = avcodec_decode_audio4(pInputStream->codec, frame, &got_frame, &pktIn);
               DIE_IF_LESS_ZERO(len, "Error decoding frame: %s", e2s(len));
               if (len < 0) break;
               len = FFMIN(len, pktIn.size);
               size_t unpadded_linesize = frame->nb_samples * av_get_bytes_per_sample(frame->format);
               LOGI("audio_frame n:%d nb_samples:%d pts:%s\n", frame_count++, frame->nb_samples, av_ts2timestr(frame->pts, &(pInputStream->codec->time_base)));
               if (got_frame) {
                   do {
                       av_init_packet(&pktOut);
                       pktOut.data = NULL;
                       pktOut.size = 0;
                       LOGI("encode frame");
                       DIE_IF_UNDEFINED(pOutputStream->codec, "no output codec");
                       DIE_IF_UNDEFINED(frame->nb_samples, "no nb samples");
                       DIE_IF_UNDEFINED(pOutputStream->codec->internal, "no internal");
                       LOGI("tests done");
                       len = avcodec_encode_audio2(pOutputStream->codec, &pktOut, frame, &got_packet);
                       LOGI("encode done");
                       DIE_IF_LESS_ZERO(len, "Error (re)encoding frame: %s", e2s(len));
                   } while (!got_packet);
                   // write packet;
                   LOGI("write packet");
                   /* Write the compressed frame to the media file. */
                   error = av_interleaved_write_frame(pOutputCtx, &pktOut);
                   DIE_IF_LESS_ZERO(error, "Error while writing audio frame: %s", e2s(error));
                   av_free_packet(&pktOut);
               }
               pktIn.data += len;
               pktIn.size -= len;
           } while (pktIn.size > 0);
           av_free_packet(&pktIn);
       }

       LOGI("write trailer");
       av_write_trailer(pOutputCtx);
       LOGI("end");

       /// close resources
       avcodec_free_frame(&frame);
       avcodec_close(pInputStream->codec);
       av_free(pInputStream->codec);
       avcodec_close(pOutputStream->codec);
       av_free(pOutputStream->codec);
       avformat_close_input(&pInputCtx);
       avformat_free_context(pOutputCtx);

       return 0;
    }

  • H.264 - Identify Access Units of an image

    7 décembre 2017, par ivan_filho

    I need to parse an H.264 stream to collect only the NALs needed to form one complete image, of only one frame. I'm reading the H.264 standard, but it's confusing and hard to read. I made some experiments, but they did not work. For example, I extracted an access unit with primary_pic_type == 0 containing only slice_type == 7 (I-slice); it should give me a frame, but when I tried to extract it with ffmpeg, it did not work. However, when I appended the next access unit, containing only slice_type == 5 (P-slice), it did work. Maybe I need to extract POC information, but I think not, because I only need to extract one frame; I'm not sure, though. Does anyone have a tip on how to get only the NALs I need to form one complete image?

  • FFmpeg : how to burn any kinds of subtitle into videos

    28 décembre 2017, par MinhLee

    As the title says, I want to hardsub (burn the subtitles into) some MKV videos which contain subtitles. These subs may be ASS, SRT or picture-based.
    I have read the FFmpeg documentation and other material, but only found ways to burn in a sub when its type is already known.

    I try to get subtitle info using mediainfo :

    mediainfo "--Output=Text;%Format%\r\n" input.mkv

    The output is "ASS" for ASS subs, and "Text", "SubRip" or "UTF-8" for SRT...
    But I find it hard to keep going because of the many kinds of output I can get, which makes it hard to write a batch file that tells ffmpeg to auto-detect the kind of sub and choose the correct filter.

    Is there any way to do this? Thanks!