
Other articles (75)

  • Authorizations overridden by plugins

    27 April 2010, by Mediaspip core

    autoriser_auteur_modifier() so that visitors can edit their information on the authors page

  • List of compatible distributions

    26 April 2011

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP:

    Distribution   Version name           Version number
    Debian         Squeeze                6.x.x
    Debian         Wheezy                 7.x.x
    Debian         Jessie                 8.x.x
    Ubuntu         The Precise Pangolin   12.04 LTS
    Ubuntu         The Trusty Tahr        14.04

    If you want to help us improve this list, you can provide us access to a machine whose distribution is not mentioned above or send the necessary fixes to add (...)

  • User profiles

    12 April 2011

    Each user has a profile page for editing their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized, visible only when the visitor is logged in to the site.
    The user can edit their profile from their author page; a "Modifier votre profil" ("Edit your profile") link in the navigation is (...)

On other sites (9629)

  • Programmatically get non-overlapping images from MP4

    15 February 2013, by Carlos F

    My ultimate goal is to get meaningful snapshots from MP4 videos that are either 30 min or 1 hour long. "Meaningful" is a bit ambitious, so I have simplified my requirements.

    The image should be crisp - non-overlapping, and ideally not blurry. Initially, I thought getting a keyframe would work, but I had no idea that keyframes could have overlapping images embedded in them, like this:

    [screenshot: keyframe with overlapping/ghosted images]

    Of course, some keyframe images look like this instead, and those are much better:

    [screenshot: clean keyframe]

    I was wondering if someone might have source code to:

    Take a sequence of say 10-15 continuous keyframes (jpg or png) and identify the best keyframe from all of them.

    This must happen entirely programmatically. I found this paper: http://research.microsoft.com/pubs/68802/blur_determination_compressed.pdf

    and felt that I could "rank" a few images based on the above paper, but then I was dissuaded by this link: Extracting DCT coefficients from encoded images and video, given that my source video is an MP4. Of course, this confuses me, because the input into the system is just a sequence of jpg images.

    Another link that is interesting is:

    Detection of Blur in Images/Video sequences

    However, I am not sure if this will work for "overlapping" images.

    Any ideas?
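    One simple way to rank a handful of decoded keyframes by sharpness, without touching DCT coefficients at all, is to compare gradient energy: blurred or ghosted frames carry less high-frequency content. This is a minimal sketch of that idea with NumPy (the gradient-energy metric is my own simplification, not the method from the Microsoft paper), operating on grayscale arrays you would get by decoding the jpg/png keyframes:

    ```python
    import numpy as np

    def sharpness_score(gray: np.ndarray) -> float:
        """Mean squared gradient magnitude; higher means sharper."""
        gy, gx = np.gradient(gray.astype(np.float64))
        return float(np.mean(gx * gx + gy * gy))

    def best_keyframe(frames) -> int:
        """Return the index of the sharpest frame in a list of grayscale arrays."""
        return max(range(len(frames)), key=lambda i: sharpness_score(frames[i]))

    def box_blur(img: np.ndarray) -> np.ndarray:
        """Naive 3x3 box blur, used here only to fake a 'bad' keyframe."""
        p = np.pad(img, 1, mode="edge")
        h, w = img.shape
        return sum(p[i:i + h, j:j + w]
                   for i in range(3) for j in range(3)) / 9.0
    ```

    A frame with overlapping/ghosted content behaves much like a blurred one under this metric, so picking the maximum score over a window of 10-15 keyframes tends to select the cleanest image; it is a heuristic, though, and may need tuning on real footage.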

  • How do I convert ADPCM to PCM using FFmpeg?

    26 January 2013, by mystafer

    I have a video feed that sends me audio using the ADPCM codec. However, Android only supports PCM format. How can I convert the ADPCM audio feed into a PCM audio feed?

    The answer to this may be similar to the answer to this question.

    I have successfully decoded the frame with this code :

    int len = avcodec_decode_audio4(pAudioCodecCtx, pAudioFrame, &frameFinished, &packet);

    Is the secret here to use a reverse encode function?

    Here is what I have so far in my audio decode function:

    if(packet_queue_get(env, javaThread, pAudioPacketQueue, &packet, 1) < 0) {
       LOGE("audio - after get packet failed");
       return;
    }
    LOGD("Dequeued audio packet");

    // calculate frame size
    int frameSize;
    if (pPcmAudioCodecCtx->frame_size) {
       frameSize = pPcmAudioCodecCtx->frame_size;
    } else {
       /* if frame_size is not set, the number of samples must be
        * calculated from the buffer size */
       int64_t nb_samples = (int64_t)AUDIO_PCM_OUTBUFF_SIZE * 8 /
               (av_get_bits_per_sample(pPcmAudioCodecCtx->codec_id) *
                       pPcmAudioCodecCtx->channels);
       frameSize = nb_samples;
    }

    int pcmBytesPerSample = av_get_bytes_per_sample(pPcmAudioCodecCtx->sample_fmt);
    int pcmFrameBytes = frameSize * pcmBytesPerSample * pPcmAudioCodecCtx->channels;

    uint8_t *pDataStart = packet.data;
    while(packet.size > 0) {
       int len = avcodec_decode_audio4(pAudioCodecCtx, pAudioFrame, &frameFinished, &packet);
       LOGD("Decoded ADPCM frame");

       if (len < 0) {
           LOGE("Error while decoding audio");
           return;
       }

       if (frameFinished) {
           // store frame data in FIFO buffer
           uint8_t *inputBuffer = pAudioFrame->data[0];
           int inputBufferSize = pAudioFrame->linesize[0];
           av_fifo_generic_write(fifoBuffer, inputBuffer, inputBufferSize, NULL);
           LOGD("Added ADPCM frame to FIFO buffer");

           // check if fifo buffer has enough data for a PCM frame
           while (av_fifo_size(fifoBuffer) >= pcmFrameBytes) {
               LOGI("PCM frame data in FIFO buffer");

               // read frame's worth of data from FIFO buffer
               av_fifo_generic_read(fifoBuffer, pAudioPcmOutBuffer, pcmFrameBytes, NULL);
               LOGD("Read data from FIFO buffer into pcm frame");


               avcodec_get_frame_defaults(pPcmAudioFrame);
               LOGD("Got frame defaults");

               pPcmAudioFrame->nb_samples = pcmFrameBytes / (pPcmAudioCodecCtx->channels *
                       pcmBytesPerSample);

               avcodec_fill_audio_frame(pPcmAudioFrame, pPcmAudioCodecCtx->channels,
                       pPcmAudioCodecCtx->sample_fmt,
                       pAudioPcmOutBuffer, pcmFrameBytes, 1);
               LOGD("Filled frame audio with data");

               // fill audio play buffer
               int dataSize = pPcmAudioFrame->linesize[0];
               LOGD("Data to output: %d", dataSize);
               jbyteArray audioPlayBuffer = (jbyteArray) env->GetObjectField(ffmpegCtx, env->GetFieldID(cls, "audioPlayBuffer", "[B"));
               jbyte *bytes = env->GetByteArrayElements(audioPlayBuffer, NULL);
               memcpy(bytes, pPcmAudioFrame->data[0], dataSize);
               env->ReleaseByteArrayElements(audioPlayBuffer, bytes, 0);
               LOGD("Copied data into Java array");

               env->CallVoidMethod(player, env->GetMethodID(playerCls, "updateAudio", "(I)V"), dataSize);
           }
       }

       // advance past the bytes the decoder consumed from this packet
       packet.data += len;
       packet.size -= len;
    }

    // restore the original data pointer before releasing the packet
    packet.data = pDataStart;
    av_free_packet(&packet);
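
    The FIFO step in the code above - append whatever the decoder produces, then emit a fixed-size PCM frame whenever enough bytes have accumulated - is independent of FFmpeg and can be sketched on its own. This is just an illustration of that buffering pattern (the frame size is an arbitrary example, and `PcmFrameBuffer` is a made-up name, not an FFmpeg API):

    ```python
    class PcmFrameBuffer:
        """Accumulates decoded bytes and yields fixed-size PCM frames,
        mirroring the av_fifo_generic_write / av_fifo_generic_read pattern."""

        def __init__(self, frame_bytes: int):
            self.frame_bytes = frame_bytes
            self.buf = bytearray()

        def write(self, data: bytes) -> None:
            """Append the bytes produced by one decode call."""
            self.buf.extend(data)

        def read_frames(self):
            """Yield complete frames; any remainder stays buffered."""
            while len(self.buf) >= self.frame_bytes:
                frame = bytes(self.buf[:self.frame_bytes])
                del self.buf[:self.frame_bytes]
                yield frame
    ```

    In the C code each emitted frame is then handed to the PCM side via avcodec_fill_audio_frame; the buffering exists because one decoded ADPCM frame rarely lines up with one PCM frame's worth of samples.
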
  • Restreaming video containing two languages live with ffmpeg

    9 November 2012, by user1810837

    I have a project where I need to restream a live stream that has two languages in its audio:
    Spanish on the left channel and English on the right.

    The stream mapping is:

    Stream #0:0: Video: h264 ([7][0][0][0] / 0x0007), yuv420p, 512x288 [SAR 1:1 DAR 16:9], q=2-31, 1k tbn, 1k tbc
    Stream #0:1: Audio: mp3 ([2][0][0][0] / 0x0002), 44100 Hz, stereo, s16, 18 kb/s

    I need to restream this back live with just the English from the right channel, or just the Spanish from the left. I tried looking everywhere but did not find any kind of solution.

    Since this needs to be done live, I can't use other programs to separate the video and audio first.

    This needs to be done through ffmpeg, and I wonder whether it is even capable of doing this as built, or whether it would need some custom modification.
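
    A stock ffmpeg build can do this with the pan audio filter: pan builds a mono output from a single input channel (c0 is the left/Spanish channel, c1 the right/English one). A hedged sketch only - the input and output URLs, the RTMP/FLV output format, and the MP3 bitrate below are placeholders, not values from the question:

    ```shell
    # Keep only the right channel (English) as mono audio; copy video untouched.
    ffmpeg -i "http://example.com/live/input" \
           -map 0:v -c:v copy \
           -map 0:a -af "pan=mono|c0=c1" -c:a libmp3lame -b:a 64k \
           -f flv "rtmp://example.com/live/english"

    # For the left channel (Spanish), swap the pan mapping:
    #   -af "pan=mono|c0=c0"
    ```

    Because the video stream is copied rather than re-encoded, this stays cheap enough to run on a live stream.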