Advanced search

Media (91)

Other articles (56)

  • Requesting the creation of a channel

    12 March 2010, by

    Depending on how the platform is configured, the user may have two different methods available for requesting the creation of a channel. The first is at the time of registration; the second, after registration, by filling in a request form.
    Both methods ask for the same information and work in roughly the same way: the future user must fill in a series of form fields that, first of all, give the administrators information about (...)

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first MediaSPIP stable release.
    Its official release date is June 21, 2013, and it is announced here.
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed with other manual (...)

  • Mediabox: opening images in the maximum space available to the user

    8 February 2011, by

    Image viewing is constrained by the width allowed by the site’s design (which depends on the theme in use), so images are displayed at a reduced size. To take advantage of all the space available on the user’s screen, it is possible to add a feature that displays the image in a multimedia box appearing above the rest of the content.
    To do this, you need to install the "Mediabox" plugin.
    Configuring the multimedia box
    As soon as (...)

On other sites (6240)

  • passing script variable of filename with spaces in bash to external program (ffmpeg) fails

    13 January 2016, by BostonScott

    Short story: I’m trying to write a script that will use FFmpeg to convert the many files stored in one directory to a "standard" mp4 format and save the converted files in another directory. It’s been a learning experience (a fun one!), since I haven’t done any real coding since the days when Pascal and FORTRAN on an IBM 370 mainframe were in vogue.

    Essentially, the script takes the filename, strips the path and extension off it, reassembles the filename with the path and an mp4 extension, and calls FFmpeg with some set parameters to do the conversion. If the directory contains only video files without spaces in the names, then everything works fine. If the filenames contain spaces, then FFmpeg is not able to process the file and moves on to the next one. The error indicates that FFmpeg is only seeing the filename up to the first space. I’ve included both the script and output below.

    Thanks for any help and suggestions you may have. If you think I should be doing this another way, then please, by all means, give me your suggestions. As I said, it’s been a long time since I did anything like this. I’m enjoying it, though.

    I’ve included the code first, followed by example output.

    for file in ./TBC/*.mp4
       do

       echo "Start of iteration"
       echo "Full text of file name:" $file

       #Remove everything up to  "C/" (filename without path)
       fn_orig=${file#*C/}
       echo "Original file name:" $fn_orig

       #Length of file name
       fn_len=${#fn_orig}
       echo "Filename Length:" $fn_len

       #file name without path or extension
       fn_base=${fn_orig:0:$fn_len-4}
       echo "Base file name:" $fn_base

       #new filename suffix
       newsuffix=".conv.mp4"

       fn_out=./CONV/$fn_base$newsuffix
       echo "Converted file name:" $fn_out

       ffmpeg -i $file -metadata title="$fn_orig" -c:v libx264 -c:a libfdk_aac -b:a 128k $fn_out

       echo "End of iteration"
       echo
       done
    echo "Script completed"

    With the ffmpeg line commented out and two files in the ./TBC directory, this is the output that I get:

       Start of iteration
       Full text of file name: ./TBC/Test file with spaces.mp4
       Original file name: Test file with spaces.mp4
       Filename Length: 25
       Base file name: Test file with spaces
       Converted file name: ./CONV/Test file with spaces.conv.mp4
       End of iteration

       Start of iteration
       Full text of file name: ./TBC/Test_file_with_NO_spaces.mp4
       Original file name: Test_file_with_NO_spaces.mp4
       Filename Length: 28
       Base file name: Test_file_with_NO_spaces
       Converted file name: ./CONV/Test_file_with_NO_spaces.conv.mp4
       End of iteration

       Script completed

    I won’t bother to post the results when ffmpeg is uncommented, other than to state that it fails with the error:
    ./TBC/Test: No such file or directory

    The script then continues to the next file which completes successfully because it has no spaces in its name. The actual filename is "Test file with spaces.mp4" so you can see that ffmpeg stops after the word "Test" when it encounters a space.

    I hope this has been clear and concise and hopefully someone will be able to point me in the right direction. There is a lot more that I want to do with this script such as parsing subdirectories and ignoring non-video files, etc.

    I look forward to any insight you can give!
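    For reference, unquoted expansions such as $file undergo word splitting in bash, so ffmpeg receives "./TBC/Test" and the rest of the name as separate arguments. A minimal sketch of the same ffmpeg invocation with the expansions double-quoted (same variables as in the script above):

       ffmpeg -i "$file" -metadata title="$fn_orig" -c:v libx264 -c:a libfdk_aac -b:a 128k "$fn_out"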

  • Android + ffmpeg + AudioTrack produces bad audio output

    12 September 2014, by Goddchen

    Here is what I am trying to do: use an AudioRecord and "pipe" the output of AudioRecord.read(byte[],...) to an ffmpeg process’s stdin, which will convert to a 3gp (AAC) file.

    The ffmpeg call is as follows:

           ProcessBuilder processBuilder = new ProcessBuilder(BINARY.getAbsolutePath(),
                   "-y",
                   "-ar", "44100", "-c:a", "pcm_s16le", "-ac", "1","-f","s16le",
                   "-i", "-",
                   "-strict", "-2", "-c:a", "aac",
                   outFile.getAbsolutePath());

    The AudioRecord is set up as follows:

    AudioRecord record = new AudioRecord(/*AudioSource.VOICE_RECOGNITION,*/ AudioSource.MIC,
               SAMPLING_RATE,
               AudioFormat.CHANNEL_IN_MONO,
               AudioFormat.ENCODING_PCM_16BIT,
               bufferSize);

    SAMPLING_RATE = 44100, and bufferSize is the value returned by AudioRecord.getMinBufferSize(...).
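    For completeness, a minimal sketch of how that buffer size is obtained, assuming the same constants as in the constructor above:

    // minimum internal buffer size for the recording configuration used above
    int bufferSize = AudioRecord.getMinBufferSize(SAMPLING_RATE,
            AudioFormat.CHANNEL_IN_MONO,
            AudioFormat.ENCODING_PCM_16BIT);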

    I am writing the data to ffmpeg like this:

    try {
        IOUtils.write(data, getFFmpegHelper().getCurrentProcessOutputStream());
    } catch (Exception e) {
        Log.e(Application.LOG_TAG, "Error writing data to ffmpeg process", e);
        //TODO notify user, stop the recording, etc...
    }

    So far so good: ffmpeg runs and creates a proper 3gp file. But the audio in the file is totally off. It seems "choppy" (not sure if that is the correct English word ;) ) and the pace is also wrong: it plays too fast.

    Check out this sample : http://goddchen.de/android/tmp/tmp.3gp

    This is the output of the ffmpeg process:

       [s16le @ 0x23634d0] Estimating duration from bitrate, this may be inaccurate
       Guessed Channel Layout for  Input Stream #0.0 : mono
       Input #0, s16le, from 'pipe:':
       Duration: N/A, start: 0.000000, bitrate: 705 kb/s
       Stream #0:0: Audio: pcm_s16le, 44100 Hz, mono, s16, 705 kb/s
       [aformat @ 0x2363100] auto-inserting filter 'auto-inserted resampler 0' between the filter 'src' and the filter 'aformat'
       [aresample @ 0x235b0a0] chl:mono fmt:s16 r:44100Hz -> chl:mono fmt:flt r:44100Hz
       Output #0, 3gp, to '/data/data/com.test.audio/files/tmp.3gp':
       Metadata:
       encoder         : Lavf54.6.100
       Stream #0:0: Audio: aac (mp4a / 0x6134706D), 44100 Hz, mono, flt, 128 kb/s
       Stream mapping:
       Stream #0:0 -> #0:0 (pcm_s16le -> aac)
       size=       3kB time=00:00:00.18 bitrate= 132.5kbits/s    
       size=       8kB time=00:00:00.55 bitrate= 120.9kbits/s
       size=      12kB time=00:00:00.83 bitrate= 121.8kbits/s
       size=      16kB time=00:00:01.04 bitrate= 122.8kbits/s
       size=      20kB time=00:00:01.32 bitrate= 122.5kbits/s
       size=      23kB time=00:00:01.53 bitrate= 121.6kbits/s
       size=      27kB time=00:00:01.81 bitrate= 121.0kbits/s
       size=      31kB time=00:00:02.11 bitrate= 120.7kbits/s
       size=      35kB time=00:00:02.32 bitrate= 123.4kbits/s
       video:0kB audio:34kB global headers:0kB muxing overhead 3.031610%

  • How to set pts, dts and duration in ffmpeg library?

    24 March, by hslee

    I want to pack some compressed video packets (H.264) into an ".mp4" container.
    In one word: muxing, with no decoding and no encoding.
    And I have no idea how to set pts, dts and duration.

    1. I get the packets with the "pcap" library.
    2. I removed the headers before the compressed video data shows up, e.g. Ethernet, VLAN.
    3. I collected data until I had one frame and decoded it to get information about the data, e.g. width and height. (I am not sure this is necessary.)
    4. I initialized the output context, stream and codec context.
    5. I started to receive packets with the "pcap" library again (now for muxing).
    6. I made one frame and put that data in an AVPacket structure.
    7. I try to set PTS, DTS and duration. (I think this is the wrong part, but I am not sure.)

    *7-1. At the first frame, I saved the time (msec) from the packet header structure.

    *7-2. Whenever I made one frame, I set the parameters like this: PTS = (current time - start time), DTS = (same value as PTS), duration = (current PTS - previous PTS). A sketch of this scheme appears at the end of this post.

    I think it has some error because:

    1. I don't know how far DTS should suitably be from PTS.

    2. At least, I think duration means how long this frame is shown, from now until the next frame, so it should have the value (next PTS - current PTS), but I cannot know the next PTS at that time.

    It has I-frames only.

    // make input context for decoding
    AVFormatContext *&ic = gInputContext;
    ic = avformat_alloc_context();
    AVCodec *cd = avcodec_find_decoder(AV_CODEC_ID_H264);
    AVStream *st = avformat_new_stream(ic, cd);
    AVCodecContext *cc = st->codec;
    avcodec_open2(cc, cd, NULL);

    // make a packet and decode it once the collected packets form one frame
    gPacket.stream_index = 0;
    gPacket.size    = gPacketLength[0];
    gPacket.data    = gPacketData[0];
    gPacket.pts     = AV_NOPTS_VALUE;
    gPacket.dts     = AV_NOPTS_VALUE;
    gPacket.flags   = AV_PKT_FLAG_KEY;
    avcodec_decode_video2(cc, gFrame, &got_picture, &gPacket);

    // I checked that the codec context is initialized automatically after "avcodec_decode_video2";
    // set some fields that I know are not initialized
    cc->time_base.den   = 90000;
    cc->time_base.num   = 1;
    cc->bit_rate    = 2500000;
    cc->gop_size    = 1;

    // make the output context from the input context
    AVFormatContext *&oc = gOutputContext;
    avformat_alloc_output_context2(&oc, NULL, NULL, filename);
    AVFormatContext *&ic = gInputContext;
    AVStream *ist = ic->streams[0];
    AVCodecContext *&icc = ist->codec;
    AVStream *ost = avformat_new_stream(oc, icc->codec);
    AVCodecContext *occ = ost->codec;
    avcodec_copy_context(occ, icc);
    occ->flags |= CODEC_FLAG_GLOBAL_HEADER;
    avio_open(&(oc->pb), filename, AVIO_FLAG_WRITE);

    // repeated part for muxing
    AVRational Millisecond = { 1, 1000 };
    gPacket.stream_index = 0;
    gPacket.data = gPacketData[0];
    gPacket.size = gPacketLength[0];
    gPacket.pts = av_rescale_rnd(pkthdr->ts.tv_sec * 1000
        + pkthdr->ts.tv_usec / 1000
        - gStartTime, Millisecond.den, ost->time_base.den,
        (AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
    gPacket.dts = gPacket.pts;
    gPacket.duration = gPacket.pts - gPrev;
    gPacket.flags = AV_PKT_FLAG_KEY;
    gPrev = gPacket.pts;
    av_interleaved_write_frame(gOutputContext, &gPacket);

    Expected and actual results: a .mp4 video file that can play.
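
    For reference, here is a minimal C sketch of the timestamp scheme described in *7-1/*7-2 above, using FFmpeg's av_rescale_q(). The names ms_start and ms_now are hypothetical stand-ins for the millisecond times taken from the pcap packet headers; the other names come from the code above.

    // Hypothetical sketch of the scheme described above, not a verified fix:
    // rescale the elapsed capture time (milliseconds) into the output stream's time base.
    int64_t elapsed_ms = ms_now - ms_start;                // time since the first frame (ms)
    AVRational ms_tb = { 1, 1000 };                        // source time base: 1 millisecond
    gPacket.pts      = av_rescale_q(elapsed_ms, ms_tb, ost->time_base);
    gPacket.dts      = gPacket.pts;                        // I-frames only, so dts == pts
    gPacket.duration = gPacket.pts - gPrev;                // spacing from the previous frame
    gPrev            = gPacket.pts;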