Advanced search

Media (0)

Word: - Tags -/latitude

No media matching your criteria is available on the site.

Other articles (106)

  • User profiles

    12 April 2011, by

    Each user has a profile page that lets them edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized, visible only when the visitor is logged in to the site.
    The user can reach the profile editor from their author page; a "Modifier votre profil" link in the navigation is (...)

  • Adding user-specific information and other changes to author-related behavior

    12 April 2011, by

    The simplest way to add information to authors is to install the Inscription3 plugin. It also lets you change certain user-related behaviors (refer to its documentation for more information).
    It is also possible to add fields to authors by installing the plugins champs extras 2 and Interface pour champs extras.

  • Encoding and conversion into formats playable on the Internet

    10 April 2011

    MediaSPIP converts and re-encodes uploaded documents in order to make them playable on the Internet and automatically usable with no intervention from the content creator.
    Videos are automatically encoded into the formats supported by HTML5: MP4, OGV, and WebM. The "MP4" version is also used by the Flash fallback player required for older browsers.
    Audio documents are likewise re-encoded into the two formats usable with HTML5: MP3 and Ogg. The "MP3" version (...)
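
    As a rough illustration only (this is not MediaSPIP's actual code), the kind of transcode pass described above can be sketched as shell-outs to the ffmpeg CLI; the file names here are placeholders:

    // Hypothetical sketch: produce the HTML5 video variants for one upload.
    #include <cstdlib>
    #include <string>

    void transcode_for_html5(const std::string& in) {
        // H.264/AAC in MP4 (also reused by the Flash fallback player)
        std::system(("ffmpeg -i " + in + " -c:v libx264 -c:a aac out.mp4").c_str());
        // Theora/Vorbis in Ogg
        std::system(("ffmpeg -i " + in + " -c:v libtheora -c:a libvorbis out.ogv").c_str());
        // VP8/Vorbis in WebM
        std::system(("ffmpeg -i " + in + " -c:v libvpx -c:a libvorbis out.webm").c_str());
    }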

On other sites (7938)

  • Evolution of Multimedia Fiefdoms

    1 October 2014, by Multimedia Mike — General

    I want to examine how multimedia fiefdoms have risen and fallen through the years.


    [Image: Medieval Castle]

    Back in the day, the multimedia fiefdoms were built around the formats put forth by competing companies: there were Microsoft/WMV, Apple/MOV, and Real/RM as the big contenders. On2 always wanted to be a player in this arena but could never quite catch a break. A few brave contenders held the line for open source, and also for the power users who desired one application that could handle everything (my original motivation for wanting to get into multimedia hacking).

    The computer desktop was the battleground for internet-based media streaming. Whatever happened to those days? Actually, if memory serves, Flash-based video streaming stepped on all of them.

    Over the last 6-7 years, the battleground has expanded to cover mobile devices, where Flash’s impact has… lessened. During this time, multimedia technology pretty well standardized on a particular stack, namely, the MPEG (MP4/H.264/AAC) stack.

    The belligerents in this war tried for years to effectively penetrate new territory, namely, the living room where the television lived. This had been slow going for years due to various user interface and content issues, but it steadily improved.

    Last April, Amazon announced their entry into the set-top box market with the Fire TV. That was when it suddenly crystallized for me that the multimedia ecosystem has radically shifted. Now, the multimedia fiefdoms revolve around access to content via streaming services.

    Off the top of my head, here are some of the fiefdoms these days (fiefdoms I have experience using):

    • Netflix (subscription streaming)
    • Amazon (subscription, rental, and purchased streaming)
    • Hulu Plus (subscription streaming)
    • Apple (rental and purchased media)

    I checked some results on Can I Stream.It? (which I refer to often) and found a bunch more streaming fiefdoms, such as Google (both Play and YouTube, which are separate services), Sony, Xbox 360, Crackle, Redbox Instant, Vudu, Target Ticket, Epix, SnagFilms, and XFINITY StreamPix. And these are probably just the services available in the United States; I know other geographical regions have their own fiefdoms.

    What happened?

    When I got into multimedia hacking, there were all these disparate, competing ecosystems. As a consumer, I didn’t care where the media came from, I just wanted to play it. That’s what inspired me to work on open source multimedia projects. Now I realize that I have the same problem 10-15 years later : there are multiple competing ecosystems. I might subscribe to fiefdoms X and Y, but am frustrated to learn that something I’d like to watch is only available through fiefdom Z. Very few of these fiefdoms can be penetrated using open source technology.

    I’m not really sure what the point of this whole post is. Multimedia technology seems quite standardized these days. But that’s probably just my perspective, because I have spent far too long focusing on a few areas of multimedia technology, such as audio and video coding. It’s interesting that all these services probably leverage the same limited set of codecs. Their differentiation comes from the catalog of content each is able to license for streaming. There are different problems to solve in the multimedia arena now.

  • How can I pass audio frames from an input .mp4 to an output .mp4 in libavcodec ?

    30 May 2021, by Sniggerfardimungus

    I have a project that correctly opens a .mp4, extracts video frames, modifies them, and dumps the modified frames to an output .mp4. Everything works (mostly - I have a video timing bug that pops up at random, but I'll kill it) EXCEPT for the writing of audio. I don't want to modify the audio channel at all - I just want the audio from the input .mp4 to be passed, unaltered, to the output .mp4.

    There's too much code here to provide a working example, largely because there's a lot of OpenGL and GLSL in there, but the most important part is where I advance a frame. This method is called in a loop, and if the frame was a video frame, the loop sends the image data to the rendering hardware, does a bunch of GL magic on it, then writes out a frame of video. If the frame was an audio frame, the loop does nothing, but the advance_frame() method is supposed to just dump that frame to the output mp4. I'm at a loss as to what libavcodec provides that will do this.

    Note that here, I'm decoding the audio packets into frames, but that shouldn't be necessary. I'd rather work with packets and not burn the CPU time to do the decode at all. (I've tried it the other way, but this is what I wound up with when I tried to decode the data, then re-encode to create the output stream.) I just need a way to pass the packets from the input to the output.

    bool MediaContainerMgr::advance_frame() {
    int ret; // Crappy naming, but I'm using ffmpeg's name for it.
    while (true) {
        ret = av_read_frame(m_format_context, m_packet);
        if (ret < 0) {
            // Do we actually need to unref the packet if it failed?
            av_packet_unref(m_packet);
            if (ret == AVERROR_EOF) {
                finalize_output();
                return false;
            }
            continue;
            //return false;
        }
        else {
            int response = decode_packet();
            if (response != 0) {
                continue;
            }
            // If this was an audio packet, the image renderer doesn't care about it - just push
            // the audio data to the output .mp4:
            if (m_packet->stream_index == m_audio_stream_index) {
                printf("m_packet->stream_index: %d\n", m_packet->stream_index);
                printf("  m_packet->pts: %lld\n", m_packet->pts);
                printf("  mpacket->size: %d\n", m_packet->size);
                // m_recording is true if we're writing a .mp4, as opposed to just letting OpenGL
                // display the frames onscreen.
                if (m_recording) {
                    int err = 0;
                    // I've tried everything I can find to try to push the audio frame to the
                    // output .mp4. This doesn't work, but neither do a half-dozen other libavcodec
                    // methods:
                    err = avcodec_send_frame(m_output_audio_codec_context, m_last_audio_frame);

                    if (err) {
                        printf("  encoding error: %d\n", err);
                    }
                }
            }
        // Read stream_index *before* av_packet_unref(): unref resets the
        // packet's fields, so checking it after the unref was a bug.
        bool is_video = m_packet->stream_index == m_video_stream_index;
        av_packet_unref(m_packet);
        if (is_video) {
            return true;
        }
        }
    }
}
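
    For comparison, here is a minimal passthrough sketch (not the asker's code): instead of decoding and calling avcodec_send_frame(), the compressed audio packet can go straight to the output muxer as a stream copy. This assumes the output file's audio stream was created by copying the input stream's codec parameters, and out_audio_index (a placeholder name) is that stream's index in the output:

    // Hypothetical stream-copy branch for the audio packets:
    if (m_packet->stream_index == m_audio_stream_index && m_recording) {
        AVStream* in_stream  = m_format_context->streams[m_audio_stream_index];
        AVStream* out_stream = m_output_format_context->streams[out_audio_index];
        // Convert timestamps from the input stream's time base to the output's.
        av_packet_rescale_ts(m_packet, in_stream->time_base, out_stream->time_base);
        m_packet->stream_index = out_audio_index;
        m_packet->pos = -1; // byte position is unknown in the new file
        // Hand the compressed packet to the muxer, which interleaves it
        // with the video packets being written elsewhere.
        if (av_interleaved_write_frame(m_output_format_context, m_packet) < 0)
            printf("  mux error writing audio packet\n");
    }

    Note that av_interleaved_write_frame() takes ownership of the packet and unrefs it, so a later av_packet_unref() on the same packet is harmless.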

    The workhorse of advance_frame() is decode_packet(). All of this works perfectly for video data:

    int MediaContainerMgr::decode_packet() {
    // Supply raw packet data as input to a decoder
    // https://ffmpeg.org/doxygen/trunk/group__lavc__decoding.html#ga58bc4bf1e0ac59e27362597e467efff3
    int             response;
    AVCodecContext* codec_context = nullptr;
    AVFrame*        frame         = nullptr;

    if (m_packet->stream_index == m_video_stream_index) {
        codec_context = m_video_input_codec_context;
        frame = m_last_video_frame;
    }
    if (m_packet->stream_index == m_audio_stream_index) {
        codec_context = m_audio_input_codec_context;
        frame = m_last_audio_frame;
    }

    if (codec_context == nullptr) {
        return -1;
    }

    response = avcodec_send_packet(codec_context, m_packet);
    if (response < 0) {
        char buf[256];
        av_strerror(response, buf, 256);
        printf("Error while receiving a frame from the decoder: %s\n", buf);
        return response;
    }

    // Return decoded output data (into a frame) from a decoder
    // https://ffmpeg.org/doxygen/trunk/group__lavc__decoding.html#ga11e6542c4e66d3028668788a1a74217c
    response = avcodec_receive_frame(codec_context, frame);
    if (response == AVERROR(EAGAIN) || response == AVERROR_EOF) {
        return response;
    } else if (response < 0) {
        char buf[256];
        av_strerror(response, buf, 256);
        printf("Error while receiving a frame from the decoder: %s\n", buf);
        return response;
    } else {
        printf(
            "Stream %d, Frame %d (type=%c, size=%d bytes), pts %lld, key_frame %d, [DTS %d]\n",
            m_packet->stream_index,
            codec_context->frame_number,
            av_get_picture_type_char(frame->pict_type),
            frame->pkt_size,
            frame->pts,
            frame->key_frame,
            frame->coded_picture_number
        );
    }
    return 0;
}

    I can provide the setup for all the contexts, if necessary, but for brevity maybe we can get away with what av_dump_format(m_output_format_context, 0, filename, 1) displays:

    Output #0, mp4, to 'D:\yodeling_monkey_nuggets.mp4':
  Metadata:
    encoder         : Lavf58.64.100
    Stream #0:0: Video: h264 (libx264) (avc1 / 0x31637661), yuv420p, 1920x1080, q=-1--1, 20305 kb/s, 29.97 fps, 30k tbn
    Stream #0:1: Audio: aac (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 125 kb/s
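
    For the audio stream specifically, a stream-copy setup would make the passthrough sketch above possible with no output audio encoder context at all. A hedged sketch, assuming the setup code (which is not shown) can be changed:

    // Hypothetical setup: create the output audio stream by copying the
    // input stream's codec parameters instead of opening an encoder.
    AVStream* in_audio  = m_format_context->streams[m_audio_stream_index];
    AVStream* out_audio = avformat_new_stream(m_output_format_context, nullptr);
    avcodec_parameters_copy(out_audio->codecpar, in_audio->codecpar);
    out_audio->codecpar->codec_tag = 0; // let the mp4 muxer choose its own tag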


  • Encoding images into a movie file

    5 April 2014, by RuAware

    I am trying to save JPGs into a movie. I have tried jcodec, and although my S3 plays the result fine, other devices do not, including VLC and Windows Media.

    I have just spent most of the day playing with MediaCodec. Although the required SDK level is high, it will at least help people on Jelly Bean and above. But I cannot work out how to get the files to the encoder and then write the output file.

    Ideally I want to support down to SDK 9/8.

    Has anyone got any code they can share, either to get MediaCodec to work or for another option? If you say ffmpeg, I'd love to, but my JNI knowledge is nonexistent and I would need a very good guide.

    Code for MediaCodec so far

    public class EncodeAndMux extends AsyncTask<Integer, Void, Boolean> {
       private static int bitRate = 2000000;
       private static int MAX_INPUT = 100000;
       private static String mimeType = "video/avc";

       private int frameRate = 15;    
       private int colorFormat;
       private int stride = 1;
       private int sliceHeight = 2;        

       private MediaCodec encoder = null;
       private MediaFormat inputFormat;
       private MediaCodecInfo codecInfo = null;
       private MediaMuxer muxer;
       private boolean mMuxerStarted = false;
       private int mTrackIndex = 0;  
       private long presentationTime = 0;
       private Paint bmpPaint;

       private static int WAITTIME = 10000;
       private static String TAG = "ENCODE";

       private ArrayList<String> mFilePaths;
       private String mPath;

       private EncodeListener mListener;
       private int width = 320;
       private int height = 240;
       private double mSpeed = 1;

       public EncodeAndMux(ArrayList<String> filePaths, String savePath) {
           mFilePaths = filePaths;
           mPath = savePath;  

           // Create paint to draw BMP
           bmpPaint = new Paint();
           bmpPaint.setAntiAlias(true);
           bmpPaint.setFilterBitmap(true);
           bmpPaint.setDither(true);
       }

       public void setListner(EncodeListener listener) {
           mListener = listener;
       }

       // set the speed, how many frames a second
       public void setSpead(int speed) {
           mSpeed = speed;
       }

       public double getSpeed() {
           return mSpeed;
       }

       private long computePresentationTime(int frameIndex) {
           final long ONE_SECOND = 1000000;
           return (long) (frameIndex * (ONE_SECOND / mSpeed));
       }

       public interface EncodeListener {
           public void finished();
           public void errored();
       }

       @TargetApi(Build.VERSION_CODES.JELLY_BEAN_MR2)
       @Override
       protected Boolean doInBackground(Integer... params) {

           try {
               muxer = new MediaMuxer(mPath, OutputFormat.MUXER_OUTPUT_MPEG_4);
           } catch (Exception e){
               e.printStackTrace();
           }

           // Find a code that supports the mime type
           int numCodecs = MediaCodecList.getCodecCount();
           for (int i = 0; i < numCodecs && codecInfo == null; i++) {
               MediaCodecInfo info = MediaCodecList.getCodecInfoAt(i);
               if (!info.isEncoder()) {
                   continue;
               }
               String[] types = info.getSupportedTypes();
               boolean found = false;

               for (int j = 0; j < types.length && !found; j++) {
                   if (types[j].equals(mimeType))
                       found = true;
               }

               if (!found)
                   continue;
               codecInfo = info;
           }


             for (int i = 0; i < MediaCodecList.getCodecCount(); i++) {
                    MediaCodecInfo info = MediaCodecList.getCodecInfoAt(i);
                    if (!info.isEncoder()) {
                        continue;
                    }

                    String[] types = info.getSupportedTypes();
                     for (int j = 0; j < types.length; ++j) {
                         // Compare strings with equals(); != only tests reference identity.
                         if (!types[j].equals(mimeType))
                             continue;
                         MediaCodecInfo.CodecCapabilities caps = info.getCapabilitiesForType(types[j]);
                         for (int k = 0; k < caps.profileLevels.length; k++) {
                             if (caps.profileLevels[k].profile == MediaCodecInfo.CodecProfileLevel.AVCProfileHigh && caps.profileLevels[k].level == MediaCodecInfo.CodecProfileLevel.AVCLevel4) {
                                codecInfo = info;
                            }
                        }
                    }
            }

           Log.d(TAG, "Found " + codecInfo.getName() + " supporting " + mimeType);

           MediaCodecInfo.CodecCapabilities capabilities = codecInfo.getCapabilitiesForType(mimeType);
           for (int i = 0; i < capabilities.colorFormats.length && colorFormat == 0; i++) {
               int format = capabilities.colorFormats[i];
               switch (format) {
                   case MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Planar:
                   case MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420PackedPlanar:
                   case MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar:
                   case MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420PackedSemiPlanar:
                   case MediaCodecInfo.CodecCapabilities.COLOR_TI_FormatYUV420PackedSemiPlanar:
                   colorFormat = format;
                   break;
               }
           }
           Log.d(TAG, "Using color format " + colorFormat);

           // Determine width, height and slice sizes
           if (codecInfo.getName().equals("OMX.TI.DUCATI1.VIDEO.H264E")) {
               // This codec doesn't support a width not a multiple of 16,
               // so round down.
               width &= ~15;
           }

           stride = width;
           sliceHeight = height;

           if (codecInfo.getName().startsWith("OMX.Nvidia.")) {
               stride = (stride + 15) / 16 * 16;
               sliceHeight = (sliceHeight + 15) / 16 * 16;
           }

           inputFormat = MediaFormat.createVideoFormat(mimeType, width, height);
           inputFormat.setInteger(MediaFormat.KEY_BIT_RATE, bitRate);
           inputFormat.setInteger(MediaFormat.KEY_FRAME_RATE, frameRate);
           inputFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT, colorFormat);
           inputFormat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 5);
    //          inputFormat.setInteger("stride", stride);
    //          inputFormat.setInteger("slice-height", sliceHeight);
           inputFormat.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, MAX_INPUT);

           encoder = MediaCodec.createByCodecName(codecInfo.getName());
           encoder.configure(inputFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
           encoder.start();

           ByteBuffer[] inputBuffers = encoder.getInputBuffers();
           ByteBuffer[] outputBuffers = encoder.getOutputBuffers();

           int inputBufferIndex= -1, outputBufferIndex= -1;
           BufferInfo info = new BufferInfo();
           for (int i = 0; i < mFilePaths.size(); i++) {

               // use decode sample to calculate inSample size and then resize
               Bitmap bitmapIn = Images.decodeSampledBitmapFromPath(mFilePaths.get(i), width, height);  

               // Create blank bitmap
               Bitmap bitmap = Bitmap.createBitmap(width, height, Config.ARGB_8888);                  

               // Center scaled image
               Canvas canvas = new Canvas(bitmap);                
               canvas.drawBitmap(bitmapIn,(bitmap.getWidth()/2)-(bitmapIn.getWidth()/2),(bitmap.getHeight()/2)-(bitmapIn.getHeight()/2), bmpPaint);

               Log.d(TAG, "Bitmap width: " + bitmapIn.getWidth() + " height: " + bitmapIn.getHeight() + " WIDTH: " + width + " HEIGHT: " + height);
               byte[] dat = getNV12(width, height, bitmap);
               bitmap.recycle();

               // Exception occurred on this below line in Emulator, LINE No. 182//**
               inputBufferIndex = encoder.dequeueInputBuffer(WAITTIME);
               Log.i("DAT", "Size= "+dat.length);

               if(inputBufferIndex >= 0){
                   int samplesiz= dat.length;
                   inputBuffers[inputBufferIndex].put(dat);
                   presentationTime = computePresentationTime(i);
                   if (i == mFilePaths.size() - 1) { // last frame; i never equals size() inside the loop
                       encoder.queueInputBuffer(inputBufferIndex, 0, samplesiz, presentationTime, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                       Log.i(TAG, "Last Frame");
                   } else {
                       encoder.queueInputBuffer(inputBufferIndex, 0, samplesiz, presentationTime, 0);
                   }

                   while(true) {
                      outputBufferIndex = encoder.dequeueOutputBuffer(info, WAITTIME);
                      Log.i("BATA", "outputBufferIndex="+outputBufferIndex);
                      if (outputBufferIndex >= 0) {
                          ByteBuffer encodedData = outputBuffers[outputBufferIndex];
                          if (encodedData == null) {
                              throw new RuntimeException("encoderOutputBuffer " + outputBufferIndex +
                                      " was null");
                          }

                           if ((info.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0) {
                              // The codec config data was pulled out and fed to the muxer when we got
                              // the INFO_OUTPUT_FORMAT_CHANGED status.  Ignore it.
                              Log.d(TAG, "ignoring BUFFER_FLAG_CODEC_CONFIG");
                              info.size = 0;
                          }

                          if (info.size != 0) {
                              if (!mMuxerStarted) {
                                  throw new RuntimeException("muxer hasn&#39;t started");
                              }

                              // adjust the ByteBuffer values to match BufferInfo (not needed?)
                              encodedData.position(info.offset);
                              encodedData.limit(info.offset + info.size);

                              muxer.writeSampleData(mTrackIndex, encodedData, info);
                              Log.d(TAG, "sent " + info.size + " bytes to muxer");
                          }

                          encoder.releaseOutputBuffer(outputBufferIndex, false);

                          inputBuffers[inputBufferIndex].clear();
                          outputBuffers[outputBufferIndex].clear();

                           if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
                              break;      // out of while
                          }

                      } else if (outputBufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
                          // Subsequent data will conform to new format.
                          MediaFormat opmediaformat = encoder.getOutputFormat();
                          if (!mMuxerStarted) {
                              mTrackIndex = muxer.addTrack(opmediaformat);
                              muxer.start();
                              mMuxerStarted = true;
                          }
                          Log.i(TAG, "op_buf_format_changed: " + opmediaformat);
                      } else if(outputBufferIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
                          outputBuffers = encoder.getOutputBuffers();
                          Log.d(TAG, "Output Buffer changed " + outputBuffers);
                      } else if(outputBufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER) {
                          // No Data, break out
                          break;
                      } else {
                          // Unexpected State, ignore it
                          Log.d(TAG, "Unexpected State " + outputBufferIndex);
                      }
                   }

               }    
           }

           if (encoder != null) {
               encoder.flush();
               encoder.stop();
               encoder.release();
               encoder = null;
           }

           if (muxer != null) {
               muxer.stop();
               muxer.release();
               muxer = null;
           }

           return true;

       };

       @Override
       protected void onPostExecute(Boolean result) {
           if (result) {
               if (mListener != null)
                   mListener.finished();
           } else {
               if (mListener != null)
                   mListener.errored();
           }
           super.onPostExecute(result);
       }



       byte [] getNV12(int inputWidth, int inputHeight, Bitmap scaled) {
           int [] argb = new int[inputWidth * inputHeight];
           scaled.getPixels(argb, 0, inputWidth, 0, 0, inputWidth, inputHeight);
           byte [] yuv = new byte[inputWidth*inputHeight*3/2];
           encodeYUV420SP(yuv, argb, inputWidth, inputHeight);
           scaled.recycle();
           return yuv;
       }


       void encodeYUV420SP(byte[] yuv420sp, int[] argb, int width, int height) {
           final int frameSize = width * height;
           int yIndex = 0;
           int uvIndex = frameSize;
           int a, R, G, B, Y, U, V;
           int index = 0;
           for (int j = 0; j < height; j++) {
               for (int i = 0; i < width; i++) {

                   a = (argb[index] & 0xff000000) >> 24; // a is not used obviously
                   R = (argb[index] & 0xff0000) >> 16;
                   G = (argb[index] & 0xff00) >> 8;
                   B = (argb[index] & 0xff) >> 0;

                   // well known RGB to YVU algorithm
                   Y = ( (  66 * R + 129 * G +  25 * B + 128) >> 8) +  16;
                   V = ( ( -38 * R -  74 * G + 112 * B + 128) >> 8) + 128;
                   U = ( ( 112 * R -  94 * G -  18 * B + 128) >> 8) + 128;

                   yuv420sp[yIndex++] = (byte) ((Y < 0) ? 0 : ((Y > 255) ? 255 : Y));
                   if (j % 2 == 0 && index % 2 == 0) {
                       yuv420sp[uvIndex++] = (byte)((V < 0) ? 0 : ((V > 255) ? 255 : V));
                       yuv420sp[uvIndex++] = (byte)((U < 0) ? 0 : ((U > 255) ? 255 : U));
                   }

                   index ++;
               }
           }
       }
    }

    This has now been tested on 4 of my devices and works fine. Is there a way to:

    1/ Calculate MAX_INPUT (too high and it crashes on the N7 II; I don't want that happening once released)?
    2/ Offer an API 16 solution?
    3/ Do I need stride and slice height?

    Thanks