


Other articles (107)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011, by

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
    Once it is activated, a preconfiguration is automatically put in place by MediaSPIP init so that the new feature is immediately operational. It is therefore not necessary to go through a configuration step for this.

  • Supporting all media types

    13 April 2011, by

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

On other sites (18106)

  • (FFMPEG) avformat_write_header crashes (MSVC2013) (C++) (Qt)

    29 April 2015, by user3502626

    I just downloaded FFmpeg and now I'm trying to use it in Qt with the MSVC2013 compiler.

    To understand how it works, I started reading the documentation and the API.
    According to this figure, I was trying to make a little test with libavformat.

    I did everything described in the demuxing module, then the muxing module. But my program crashes when I call the avformat_write_header() function.

    I would like to know what I did wrong; any help understanding it would be appreciated.

    In the main:

    av_register_all();

    if(!decode())
       return;

    The decode() method:

    bool MainWindow::decode()
    {
    AVFormatContext *formatContext = NULL;
    AVPacket packet;

    /**************** muxing variables ******************/

    AVFormatContext *muxingContext = avformat_alloc_context();
    AVOutputFormat *outputFormat = NULL;
    AVIOContext *contextIO = NULL;
    AVCodec *codecEncode = avcodec_find_encoder(AV_CODEC_ID_WMAV2);
    AVStream *avStream =  NULL;
    AVCodecContext *codecContext = NULL;


    /******************* demuxing **************************/

    //open a media file
    if(avformat_open_input(&formatContext,"h.mp3",NULL,NULL)!=0)
    {
       qDebug() << "paka ouve fichier";
       return false;
    }

    //function which tries to read and decode a few frames to find missing information.
    if(avformat_find_stream_info(formatContext,NULL)<0)
    {
       qDebug()<<"paka find stream";
       return false;
    }


    /**************** muxing *************************/

    //The oformat field must be set to select the muxer that will be used.
    muxingContext->oformat = outputFormat;

    //Unless the format is of the AVFMT_NOFILE type, the pb field must be set to
    //an opened IO context, either returned from avio_open2() or a custom one.
    if(avio_open2(&contextIO,"out.wma",AVIO_FLAG_WRITE,NULL,NULL)<0)
    {
       qDebug() <<"paka kreye fichier soti";
       return false;
    }
    muxingContext->pb = contextIO;

    //Unless the format is of the AVFMT_NOSTREAMS type, at least
    //one stream must be created with the avformat_new_stream() function.
    avStream = avformat_new_stream(muxingContext,codecEncode);

    //The caller should fill the stream codec context information,
    //such as the codec type, id and other parameters
    //(e.g. width / height, the pixel or sample format, etc.) as known

    codecContext = avStream->codec;
    codecContext->codec_type = AVMEDIA_TYPE_AUDIO;
    codecContext->codec_id = AV_CODEC_ID_WMAV2;
    codecContext->sample_fmt = codecEncode->sample_fmts[0];
    codecContext->bit_rate = 128000;
    codecContext->sample_rate = 44000;
    codecContext->channels = 2;

    //The stream timebase should be set to the timebase that the caller desires
    //to use for this stream (note that the timebase actually used by the muxer
    //can be different, as will be described later).

    avStream->time_base = formatContext->streams[0]->time_base;
    qDebug()<<formatContext->streams[0]->time_base.num <<"/"
    <<formatContext->streams[0]->time_base.den;


    //When the muxing context is fully set up, the caller must call    
    //avformat_write_header()
    //to initialize the muxer internals and write the file header

    qDebug() << "does not crash yet";
    if(avformat_write_header(muxingContext,NULL) <0)
    {
       qDebug()<<"cannot write header";
       return false;
    }
    qDebug() << "OOps you can't see me (John Cena)";

    ///////////////////// Reading from an opened file //////////////////////////
    while(av_read_frame(formatContext,&packet)==0)
    {
       //The data is then sent to the muxer by repeatedly calling
       //av_write_frame() or av_interleaved_write_frame()
       if(av_write_frame(muxingContext,&packet)<0)
           qDebug()<<"paka write frame";
       else
           qDebug()<<"writing";
    }

    //Once all the data has been written, the caller must call
    //av_write_trailer() to flush any buffered packets and finalize
    //the output file, then close the IO context (if any) and finally
    //free the muxing context with avformat_free_context().

    if(av_write_trailer(muxingContext)!=0)
    {
       qDebug()<<"paka ekri trailer";
       return false;
    }


    return true;
    }

    The program prints the message "does not crash yet", but never "OOps you can't see me (John Cena)".

    And there is no error. I used an MP3 file as input and I would like to output it as WMA.
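
    One likely cause, for what it's worth: in the code above, outputFormat is declared as NULL and never assigned before muxingContext->oformat = outputFormat;, so the muxer is still NULL when avformat_write_header() is reached, which matches a crash at exactly that call. A hedged sketch of the usual setup (same FFmpeg API generation as the question, names unchanged) is to let the library pick the muxer from the output file name:

     // let FFmpeg allocate the output context and choose the muxer from "out.wma",
     // so muxingContext->oformat is never left NULL
     AVFormatContext *muxingContext = NULL;
     if (avformat_alloc_output_context2(&muxingContext, NULL, NULL, "out.wma") < 0) {
         qDebug() << "could not allocate the output context";
         return false;
     }

     // equivalent alternative: guess the format explicitly, then assign it
     // AVOutputFormat *outputFormat = av_guess_format(NULL, "out.wma", NULL);
     // muxingContext->oformat = outputFormat;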

  • How to save/encode recorded raw PCM Data as AAC/MP4 format file in Android

    28 January 2015, by INVISIBLE

    I want to save recorded PCM data as an AAC/MP4 format file.
    I am using the AudioRecord class for recording audio on Android. I have successfully saved it as a WAV file by adding a WAV header to the raw data, but I don't know how to save it as an AAC/MP4 file, because the AAC/MP4 format is compressed, unlike WAV.
    The methods I am using for saving PCM data as WAV are pasted below.

    recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
               SavedSampleRate, RECORDER_CHANNELS, RECORDER_AUDIO_ENCODING,
               bufferSize);
    recorder.startRecording();
    isRecording = true;

    isRecording = true;

    recordingThread = new Thread(new Runnable() {
       @Override
       public void run() {
           writeAudioDataToFile();
       }
    }, "AudioRecorder Thread");

    recordingThread.start();

    Second, the writeAudioDataToFile() method:

    private void writeAudioDataToFile() {

       byte data[] = new byte[bufferSize];
       // short sData[] = new short[bufferSize];
       String filename = getTempFilename();
       FileOutputStream os = null;

       try {
           os = new FileOutputStream(filename);
       } catch (Exception e) {
           e.printStackTrace();
       }

       int read = 0;

       if (null != os) {
           while (isRecording) {
               double sum = 0;
               read = recorder.read(data, 0, bufferSize);

               if (AudioRecord.ERROR_INVALID_OPERATION != read) {
                   try {

                       synchronized (this) {


                           // Necessary in order to convert negative shorts!
                           final int USHORT_MASK = (1 << 16) - 1;

                           final ByteBuffer buf = ByteBuffer.wrap(data).order(
                                   ByteOrder.LITTLE_ENDIAN);

                           final ByteBuffer newBuf = ByteBuffer.allocate(
                                   data.length).order(ByteOrder.LITTLE_ENDIAN);

                           int sample;

                           while (buf.hasRemaining()) {



                               short shortSample = buf.getShort();
                               sample = (int) shortSample & USHORT_MASK;



                               sample = sample * db_value_global;
                               sample = mRmsFilterSetting.filter
                                       .apply((((int) 0) | shortSample)
                                               * db_value_global);



                               newBuf.putShort((short) sample);
                           }

                           data = newBuf.array();

                           os.write(data);





                       }

                   } catch (Exception e) {
                       e.printStackTrace();
                   }
               }
           }

           try {
               os.close();
           } catch (Exception e) {
               e.printStackTrace();
           }
       }
    }

    And finally I save it as a WAV file:

    private void copyWaveFile(ArrayList<String> inFilename, String outFilename) {
       FileInputStream[] in = null;
       FileOutputStream out = null;
       long totalAudioLen = 0;
       long totalDataLen = totalAudioLen + 36;
       long longSampleRate = SavedSampleRate;
       int channels = 2;
       long byteRate = RECORDER_BPP * SavedSampleRate * channels / 8;

       byte[] data = new byte[bufferSize];

       try {
           out = new FileOutputStream(outFilename);

           in = new FileInputStream[inFilename.size()];

           for (int i = 0; i < in.length; i++) {
               in[i] = new FileInputStream(inFilename.get(i));
               totalAudioLen += in[i].getChannel().size();
           }

           totalDataLen = totalAudioLen + 36;

           WriteWaveFileHeader(out, totalAudioLen, totalDataLen,
                   longSampleRate, channels, byteRate);

           for (int i = 0; i < in.length; i++) {
               while (in[i].read(data) != -1) {
                   out.write(data);
               }

               in[i].close();
           }

           out.close();
       } catch (Exception e) {
           e.printStackTrace();
       }
    }



    private void WriteWaveFileHeader(FileOutputStream out, long totalAudioLen,
           long totalDataLen, long longSampleRate, int channels, long byteRate)
           throws IOException {

       byte[] header = new byte[44];

       header[0] = 'R'; // RIFF/WAVE header
       header[1] = 'I';
       header[2] = 'F';
       header[3] = 'F';
       header[4] = (byte) (totalDataLen & 0xff);
       header[5] = (byte) ((totalDataLen >> 8) & 0xff);
       header[6] = (byte) ((totalDataLen >> 16) & 0xff);
       header[7] = (byte) ((totalDataLen >> 24) & 0xff);
       header[8] = 'W';
       header[9] = 'A';
       header[10] = 'V';
       header[11] = 'E';
       header[12] = 'f'; // 'fmt ' chunk
       header[13] = 'm';
       header[14] = 't';
       header[15] = ' ';
       header[16] = 16; // 4 bytes: size of 'fmt ' chunk
       header[17] = 0;
       header[18] = 0;
       header[19] = 0;
       header[20] = 1; // format = 1
       header[21] = 0;
       header[22] = (byte) channels;
       header[23] = 0;
       header[24] = (byte) (longSampleRate & 0xff);
       header[25] = (byte) ((longSampleRate >> 8) & 0xff);
       header[26] = (byte) ((longSampleRate >> 16) & 0xff);
       header[27] = (byte) ((longSampleRate >> 24) & 0xff);
       header[28] = (byte) (byteRate & 0xff);
       header[29] = (byte) ((byteRate >> 8) & 0xff);
       header[30] = (byte) ((byteRate >> 16) & 0xff);
       header[31] = (byte) ((byteRate >> 24) & 0xff);
       header[32] = (byte) (2 * 16 / 8); // block align
       header[33] = 0;
       header[34] = RECORDER_BPP; // bits per sample
       header[35] = 0;
       header[36] = 'd';
       header[37] = 'a';
       header[38] = 't';
       header[39] = 'a';
       header[40] = (byte) (totalAudioLen & 0xff);
       header[41] = (byte) ((totalAudioLen >> 8) & 0xff);
       header[42] = (byte) ((totalAudioLen >> 16) & 0xff);
       header[43] = (byte) ((totalAudioLen >> 24) & 0xff);

       out.write(header, 0, 44);
    }

    In this piece of code I am recording small PCM files with AudioRecord and saving them as WAV files by adding a WAV header.

    Is there any step-by-step tutorial on how to save PCM data as an MP4/AAC file?

    Thanks in advance.
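
    For what it's worth, since this page is FFmpeg-oriented: one route is to push the recorded 16-bit PCM through the FFmpeg C API (for instance from native code via the NDK) and mux the result into an .m4a/.mp4. The sketch below is only an outline under stated assumptions, not a tested implementation: it uses the same FFmpeg 2.x-era calls as the other snippets on this page, assumes 44100 Hz stereo interleaved s16 input, and assumes a build that includes libfdk_aac (which accepts s16 directly; FFmpeg's native aac encoder would additionally need a libswresample conversion to planar float). Error handling and encoder flushing are omitted. Android's own MediaCodec/MediaMuxer classes are another way to do this without native code.

     extern "C" {
     #include <libavformat/avformat.h>
     #include <libavcodec/avcodec.h>
     }
     #include <cstdint>
     #include <cstring>

     // sketch: encode interleaved s16 stereo PCM (44100 Hz) to AAC in an MP4/M4A container
     bool pcmToAac(const int16_t *pcm, int samplesPerChannel, const char *outPath)
     {
         av_register_all();

         // allocate the output context; the muxer is guessed from the file extension
         AVFormatContext *oc = NULL;
         if (avformat_alloc_output_context2(&oc, NULL, NULL, outPath) < 0) return false;

         // libfdk_aac takes interleaved s16 directly (build-dependent)
         AVCodec *enc = avcodec_find_encoder_by_name("libfdk_aac");
         if (!enc) return false;

         AVStream *st = avformat_new_stream(oc, enc);
         AVCodecContext *c = st->codec;               // old-style API, as in the snippets above
         c->sample_fmt     = AV_SAMPLE_FMT_S16;
         c->sample_rate    = 44100;
         c->channels       = 2;
         c->channel_layout = AV_CH_LAYOUT_STEREO;
         c->bit_rate       = 128000;
         c->time_base.num  = 1;
         c->time_base.den  = c->sample_rate;
         if (oc->oformat->flags & AVFMT_GLOBALHEADER)
             c->flags |= CODEC_FLAG_GLOBAL_HEADER;
         if (avcodec_open2(c, enc, NULL) < 0) return false;

         if (avio_open(&oc->pb, outPath, AVIO_FLAG_WRITE) < 0) return false;
         avformat_write_header(oc, NULL);

         // one AVFrame carries c->frame_size samples per channel
         AVFrame *frame = av_frame_alloc();
         frame->nb_samples     = c->frame_size;
         frame->format         = c->sample_fmt;
         frame->channel_layout = c->channel_layout;
         av_frame_get_buffer(frame, 0);

         for (int pos = 0; pos + c->frame_size <= samplesPerChannel; pos += c->frame_size) {
             av_frame_make_writable(frame);
             memcpy(frame->data[0], pcm + (int64_t)pos * c->channels,
                    (size_t)c->frame_size * c->channels * sizeof(int16_t));
             frame->pts = pos;                        // in 1/sample_rate units

             AVPacket pkt;
             av_init_packet(&pkt);
             pkt.data = NULL;
             pkt.size = 0;
             int gotPacket = 0;
             if (avcodec_encode_audio2(c, &pkt, frame, &gotPacket) == 0 && gotPacket) {
                 av_packet_rescale_ts(&pkt, c->time_base, st->time_base);
                 pkt.stream_index = st->index;
                 av_interleaved_write_frame(oc, &pkt);
                 av_free_packet(&pkt);
             }
         }
         // (a complete version would also flush the encoder with a NULL frame here)

         av_write_trailer(oc);
         av_frame_free(&frame);
         avcodec_close(c);
         avio_closep(&oc->pb);
         avformat_free_context(oc);
         return true;
     }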

  • Decoding pcm_s16le with FFMPEG ?

    9 June, by Davide Caresia

    I have a problem decoding a WAV file using FFmpeg. I'm new to it and not quite used to it yet.


    In my application I have to read the audio file and get an array of samples to work on. I used FFmpeg to write a function that takes as input the path of the file, the position in time where to start outputting samples, and the length of the chunk to decode, in seconds.


    When I try to decode the file harp.wav everything runs fine, and I can plot the samples as in the image plot-harp.png


    The file is a WAV file encoded as: pcm_u8, 11025 Hz, 1 channel, u8, 88 kb/s


    The problem comes when I try to decode the file demo-unprocessed.wav. It outputs a series of samples that makes no sense, plotted as the image graph1-demo.jpg shows.


    The file is a WAV file encoded as: pcm_s16le, 44100 Hz, 1 channel, s16, 705 kb/s


    I don't know where the problem in my code is; I have already checked the code before and after the FFmpeg decoding, and it works absolutely fine.


    Here is the code for dataReader.cpp:


     /* Start by including the necessary */
     #include "dataReader.h"
     #include <cstdlib>
     #include <iostream>
     #include <fstream>

     #ifdef __cplusplus
     extern "C" {
     #endif
         #include <libavcodec/avcodec.h>
         #include <libavformat/avformat.h>
         #include <libavutil/avutil.h>
     #ifdef __cplusplus
     }
     #endif

     using namespace std;

     /* initialization function for audioChunk */
     audioChunk::audioChunk(){
         data=NULL;
         size=0;
         bitrate=0;
     }

     /* function to get back chunk lenght in seconds */
     int audioChunk::getTimeLenght(){
         return size/bitrate;
     }

     /* initialization function for audioChunk_dNorm */
     audioChunk_dNorm::audioChunk_dNorm(){
         data=NULL;
         size=0;
         bitrate=0;
     }

     /* function to get back chunk lenght in seconds */
     int audioChunk_dNorm::getTimeLenght(){
         return size/bitrate;
     }

     /* function to normalize audioChunk into audioChunk_dNorm */
     void audioChunk_dNorm::fillAudioChunk(audioChunk* cnk){

         size=cnk->size;
         bitrate=cnk->bitrate;

         double min=cnk->data[0];
         double max=cnk->data[0];

         for(int i=0;i<cnk->size;i++){
             if(*(cnk->data+i)>max) max=*(cnk->data+i);
             else if(*(cnk->data+i)<min) min=*(cnk->data+i);
         }

         data=new double[size];

         for(int i=0;i<size;i++){
             //data[i]=cnk->data[i]+256*data[i+1];
             if(data[i]!=255) data[i]=2*((cnk->data[i])-(max-min)/2)/(max-min);
             else data[i]=0;
         }
         cout<<"bitrate " /* (...) */ <<endl;
     }

     /* (...) */
     audioChunk readData(const char* path_name, const double start_time, const double lenght){

         /* inizialize audioChunk */
         audioChunk output;

         /* Check input times */
         if((start_time<0)||(lenght<0)) {
             cout<<"Input times should be positive";
             return output;
         }

         /* Start FFmpeg */
         av_register_all();

         /* Initialize the frame to read the data and verify memory allocation */
         AVFrame* frame = av_frame_alloc();
         if (!frame)
         {
             cout << "Error allocating the frame" << endl;
             return output;
         }

         /* Initialization of the Context, to open the file */
         AVFormatContext* formatContext = NULL;
         /* Opening the file, and check if it has opened */
         if (avformat_open_input(&formatContext, path_name, NULL, NULL) != 0)
         {
             av_frame_free(&frame);
             cout << "Error opening the file" << endl;
             return output;
         }

         /* Find the stream info, if not found, exit */
         if (avformat_find_stream_info(formatContext, NULL) < 0)
         {
             av_frame_free(&frame);
             avformat_close_input(&formatContext);
             cout << "Error finding the stream info" << endl;
             return output;
         }

         /* Check inputs to verify time input */
         if(start_time>(formatContext->duration/1000000)){
             cout<< "Error, start_time is over file duration" /* (...) */ <<endl;
             return output;
         }

         /* Chunk = number of samples to output */
         long long int chunk = ((formatContext->bit_rate)*lenght/8);
         /* Start = address of sample where start to read */
         long long int start = ((formatContext->bit_rate)*start_time/8);
         /* Tot_sampl = number of the samples in the file */
         long long int tot_sampl = (formatContext->bit_rate)*(formatContext->duration)/8000000;

         /* Set the lenght of chunk to avoid segfault and to read all the file */
         if (start+chunk>tot_sampl) {chunk = tot_sampl-start;}
         if (lenght==0) {start = 0; chunk = tot_sampl;}

         /* initialize the array to output */
         output.data = new unsigned char[chunk];
         output.bitrate = formatContext->bit_rate;
         output.size=chunk;

         av_dump_format(formatContext,0,NULL,0);
         cout<< /* (...) */ endl;

         /* Find the audio Stream, if no audio stream are found, clean and exit */
         AVCodec* cdc = NULL;
         int streamIndex = av_find_best_stream(formatContext, AVMEDIA_TYPE_AUDIO, -1, -1, &cdc, 0);
         if (streamIndex < 0)
         {
             av_frame_free(&frame);
             avformat_close_input(&formatContext);
             cout << "Could not find any audio stream in the file" << endl;
             return output;
         }

         /* Open the audio stream to read data  in audioStream */
         AVStream* audioStream = formatContext->streams[streamIndex];

         /* Initialize the codec context */
         AVCodecContext* codecContext = audioStream->codec;
         codecContext->codec = cdc;
         /* Open the codec, and verify if it has opened */
         if (avcodec_open2(codecContext, codecContext->codec, NULL) != 0)
         {
             av_frame_free(&frame);
             avformat_close_input(&formatContext);
             cout << "Couldn't open the context with the decoder" << endl;
             return output;
         }

         /* Initialize buffer to store compressed packets */
         AVPacket readingPacket;
         av_init_packet(&readingPacket);


         int j=0;
         int count = 0;

         while(av_read_frame(formatContext, &readingPacket)==0){
             if((count+readingPacket.size)>start){
                 if(readingPacket.stream_index == audioStream->index){

                     AVPacket decodingPacket = readingPacket;

                     // Audio packets can have multiple audio frames in a single packet
                     while (decodingPacket.size > 0){
                         // Try to decode the packet into a frame
                         // Some frames rely on multiple packets, so we have to make sure the frame is finished before
                         // we can use it
                         int gotFrame = 0;
                         int result = avcodec_decode_audio4(codecContext, frame, &gotFrame, &decodingPacket);

                         count += result;

                         if (result >= 0 && gotFrame)
                         {
                             decodingPacket.size -= result;
                             decodingPacket.data += result;
                             int a;

                             for(int i=0;i<frame->nb_samples;i++){
                                 output.data[j]=frame->data[0][i];

                                 j++;
                                 if(j>=chunk) break;
                             }

                             // We now have a fully decoded audio frame
                         }
                         else
                         {
                             decodingPacket.size = 0;
                             decodingPacket.data = NULL;
                         }
                         if(j>=chunk) break;
                     }
                 }
             }else count+=readingPacket.size;

             // To prevent memory leak, must free packet.
             av_free_packet(&readingPacket);
             if(j>=chunk) break;
         }

         // Some codecs will cause frames to be buffered up in the decoding process. If the CODEC_CAP_DELAY flag
         // is set, there can be buffered up frames that need to be flushed, so we'll do that
         if (codecContext->codec->capabilities & CODEC_CAP_DELAY)
         {
             av_init_packet(&readingPacket);
             // Decode all the remaining frames in the buffer, until the end is reached
             int gotFrame = 0;
             int a;
             int result=avcodec_decode_audio4(codecContext, frame, &gotFrame, &readingPacket);
             while (result >= 0 && gotFrame)
             {
                 // We now have a fully decoded audio frame
                 for(int i=0;i<frame->nb_samples;i++){
                     output.data[j]=frame->data[0][i];

                     j++;
                     if(j>=chunk) break;
                 }
                 if(j>=chunk) break;
             }
         }

         // Clean up!
         av_free(frame);
         avcodec_close(codecContext);
         avformat_close_input(&formatContext);

         cout<<"Ended Reading, " /* (...) */ <<endl;

         return output;
     }


    Here is dataReader.h:


     /* 
      * File:   dataReader.h
      * Author: davide
      *
      * Created on 27 July 2015, 11.11
      */

     #ifndef DATAREADER_H
     #define DATAREADER_H

     /* function that reads a file and outputs an array of samples
      * @ path_name = the path of the file to read
      * @ start_time = the position where to start the data reading, 0 = start
      *                the time is in seconds, it can hold to 10e-6 seconds
      * @ lenght = the lenght of the frame to extract the data, 
      *            0 = read all the file (do not use with big files)
      *            if lenght > of file duration, it reads through the end of file.
      *            the time is in seconds, it can hold to 10e-6 seconds  
      */

     #include <cstdint>

     class audioChunk{
     public:
         uint8_t *data;
         unsigned int size;
         int bitrate;
         int getTimeLenght();
         audioChunk();
     };

     class audioChunk_dNorm{
     public:
         double* data;
         unsigned int size;
         int bitrate;
         int getTimeLenght();
         void fillAudioChunk(audioChunk* cnk);
         audioChunk_dNorm();
     };

     audioChunk readData(const char* path_name, const double start_time, const double lenght);

     #endif  /* DATAREADER_H */


    And finally, here is the main.cpp of the application:


     /* 
      * File:   main.cpp
      * Author: davide
      *
      * Created on 28 July 2015, 17.04
      */

     #include <cstdlib>
     #include "dataReader.h"
     #include "transforms.h"
     #include "tognuplot.h"
     #include <fstream>
     #include <iostream>

     using namespace std;

     /*
      * 
      */
     int main(int argc, char** argv) {

         audioChunk *chunk1=new audioChunk;

         audioChunk_dNorm *normChunk1=new audioChunk_dNorm;

         *chunk1=readData("./audio/demo-unprocessed.wav",0,1);

         normChunk1->fillAudioChunk(chunk1);

         ofstream file1;
         file1.open("./file/2wave.txt", std::ofstream::trunc);
         if(file1.is_open()) {
             for(int i=0;i<chunk1->size;i++) {
                 int a=chunk1->data[i];
                 file1<<a /* (...) */ <<endl;
             }
             file1.close();
         }
         /* (...) */

         return 0;
     }


    I can't understand why the output goes like this. Is it possible that the decoder can't convert the samples (pcm_s16le, 16 bits) into FFmpeg's AVFrame.data, which stores the samples as uint8_t? And if so, is there some way to make FFmpeg work for audio files that store samples at more than 8 bits?


    The file graph1-demo_good.jpg shows how the samples should look; it was extracted with a working LIBSNDFILE application that I made.


    EDIT: It seems the program can't convert the decoded data, pairs of little-endian bytes stored in two uint8_t unsigned chars, into the destination format (which I set as unsigned char[]), because each sample is stored as a 16-bit little-endian value. So the data in audioChunk.data is right, but I have to read it not as single unsigned chars, but as pairs of little-endian bytes.

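    A minimal sketch of that last point (an assumption based on the formats reported above, not a tested fix): when the decoded frame is AV_SAMPLE_FMT_S16, frame->data[0] holds interleaved signed 16-bit samples, so each pair of bytes should be read as one int16_t rather than as two separate uint8_t values. The names frame, codecContext, j and chunk are those of the readData() code above; samples16 is a hypothetical output buffer of 16-bit values.

     // read the decoded frame as 16-bit samples instead of single bytes
     if (frame->format == AV_SAMPLE_FMT_S16) {
         const int16_t *decoded = reinterpret_cast<const int16_t*>(frame->data[0]);
         int n = frame->nb_samples * codecContext->channels;   // interleaved samples in this frame
         for (int i = 0; i < n && j < chunk; i++) {
             samples16[j++] = decoded[i];   // samples16: hypothetical int16_t (or double) array sized in samples, not bytes
         }
     }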