Advanced search

Media (91)

Other articles (62)

  • Managing rights to create and edit objects

    8 February 2011

    By default, many features are restricted to administrators, but each can be configured independently to change the minimum status required to use it, notably: writing content on the site, adjustable in the form template management; adding notes to articles; adding captions and annotations to images;

  • Keeping control of your media in your hands

    13 April 2011

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

  • Uploading media and themes via FTP

    31 May 2013

    The MediaSPIP tool also handles media transferred over FTP. If you prefer to upload this way, retrieve the access credentials for your MediaSPIP site and use your favourite FTP client.
    From the start you will find the following folders in your FTP space: config/: the site's configuration folder; IMG/: media that have already been processed and are online on the site; local/: the site's cache directory; themes/: custom themes or stylesheets; tmp/: working folder (...) A minimal FTP upload sketch follows below.
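    As a rough illustration of the FTP route described above (a minimal sketch, not MediaSPIP documentation: the host, credentials and file name are placeholders; only the themes/ folder name comes from the listing):

    # Minimal sketch: push a custom stylesheet into the themes/ folder of the FTP space.
    # Host, credentials and file name are placeholders.
    from ftplib import FTP

    with FTP("ftp.example.org") as ftp:            # placeholder host
        ftp.login("mediaspip-user", "secret")      # placeholder credentials
        ftp.cwd("themes")                          # "custom themes or stylesheets" folder from the listing
        with open("my-theme.css", "rb") as local:  # placeholder local file
            ftp.storbinary("STOR my-theme.css", local)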

On other sites (13123)

  • hls.js starts at the beginning on Android mobile (Chrome, WebView too) and not at the live point, but works very nicely on desktop and iOS (hls.js 1.0.0, 2021-04-01)

    27 April 2021, by Jintor

    I'm streaming an .m3u8 with the latest hls.js 1.0.0 (not the RC), the 2021-04-01 build...

    Example: the stream began at 5 pm, and now it's 5:15 pm...

    The stream starts at the live point in almost all browsers.

    The pattern I see here: ALL browsers on Android (tested on Android 10) won't start at the live point, only at 0...

    I ran all the tests:
    • Safari desktop => stream live at 5:15
    • Safari mobile => stream live at 5:15
    • WebView (Android) => ISSUE: the player starts the stream at 0 (5 pm)
    • WKWebView (Apple iOS, iPhone/iPad) => stream live at 5:15
    • Chrome desktop (Mac/Win) => stream live at 5:15
    • Chrome mobile (Android) => ISSUE: the player starts the stream at 0 (5 pm)
    • Chrome mobile (iPhone) => stream live at 5:15
    • Microsoft Edge desktop => stream live at 5:15
    • Microsoft Edge mobile (Android) => ISSUE: the player starts the stream at 0 (5 pm)
    • Firefox desktop (Mac/Win) => stream live at 5:15
    • Opera desktop (Mac/Win) => stream live at 5:15
    • Opera Mini (iPhone) => stream live at 5:15
    • Opera Mini (Android) => ISSUE: the player starts the stream at 0 (5 pm)
    • Brave desktop (Mac/Win) => stream live at 5:15
    • Brave mobile (iPhone) => stream live at 5:15
    • Brave mobile (Android) => ISSUE: the player starts the stream at 0 (5 pm)

    This is the code:
    <script src="https://cdn.jsdelivr.net/npm/hls.js@latest"></script>

    <script>
      var video = document.getElementById("video");
      var videoSrc = "https://www.example1.com/streaming/index.m3u8";
      if (video.canPlayType("application/vnd.apple.mpegurl")) {
        video.src = videoSrc;
      } else if (Hls.isSupported()) {
        var config = {
          autoStartLoad: true,
          startPosition: -1,
          debug: false,
          capLevelOnFPSDrop: false,
          capLevelToPlayerSize: false,
          defaultAudioCodec: undefined,
          initialLiveManifestSize: 1,
          maxBufferLength: 30,
          maxMaxBufferLength: 500,
          backBufferLength: Infinity,
          maxBufferSize: 60 * 1000 * 1000,
          maxBufferHole: 0.5,
          highBufferWatchdogPeriod: 2,
          nudgeOffset: 0.1,
          nudgeMaxRetry: 3,
          maxFragLookUpTolerance: 0.25,
          liveSyncDurationCount: 3,
          liveMaxLatencyDurationCount: Infinity,
          liveDurationInfinity: false,
          enableWorker: true,
          enableSoftwareAES: true,
          manifestLoadingTimeOut: 10000,
          manifestLoadingMaxRetry: 1,
          manifestLoadingRetryDelay: 1000,
          manifestLoadingMaxRetryTimeout: 64000,
          startLevel: undefined,
          levelLoadingTimeOut: 10000,
          levelLoadingMaxRetry: 4,
          levelLoadingRetryDelay: 1000,
          levelLoadingMaxRetryTimeout: 64000,
          fragLoadingTimeOut: 20000,
          fragLoadingMaxRetry: 6,
          fragLoadingRetryDelay: 1000,
          fragLoadingMaxRetryTimeout: 64000,
          startFragPrefetch: false,
          testBandwidth: true,
          progressive: false,
          lowLatencyMode: true,
          fpsDroppedMonitoringPeriod: 5000,
          fpsDroppedMonitoringThreshold: 0.2,
          appendErrorMaxRetry: 3,
          enableWebVTT: true,
          enableIMSC1: true,
          enableCEA708Captions: true,
          stretchShortVideoTrack: false,
          maxAudioFramesDrift: 1,
          forceKeyFrameOnDiscontinuity: true,
          abrEwmaFastLive: 3.0,
          abrEwmaSlowLive: 9.0,
          abrEwmaFastVoD: 3.0,
          abrEwmaSlowVoD: 9.0,
          abrEwmaDefaultEstimate: 500000,
          abrBandWidthFactor: 0.95,
          abrBandWidthUpFactor: 0.7,
          abrMaxWithRealBitrate: false,
          maxStarvationDelay: 4,
          maxLoadingDelay: 4,
          minAutoBitrate: 0,
          emeEnabled: false
        };
        var hls = new Hls(config);
        hls.loadSource(videoSrc);
        hls.attachMedia(video);
      }
      video.addEventListener("loadedmetadata", function () { video.muted = true; video.play(); }, false);
    </script>

    // Here I added video.muted = true; video.play(); to auto-start; if I try to autoplay unmuted, many browsers refuse the command.

    // playsinline="true" is needed for Safari.

    The ffmpeg command (working: it gives me a 3 to 4 second delay):

    ffmpeg -re -i input.x -c:a aac -c:v libx264 \
      -movflags +dash -preset ultrafast \
      -crf 28 -refs 4 -qmin 4 -pix_fmt yuv420p \
      -tune zerolatency -c:a aac -ac 2 -profile:v main \
      -flags -global_header -bufsize 969k \
      -hls_time 1 -hls_list_size 0 -g 30 \
      -start_number 0 -streaming 1 -hls_playlist 1 \
      -lhls 1 -hls_playlist_type event -f hls path_to_index.m3u8
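    (Rough arithmetic, assuming I read the hls.js tuning docs correctly: with the 1-second segments from -hls_time 1 and liveSyncDurationCount: 3 in the config above, the player aims to sit about 3 × 1 s = 3 s behind the live edge, which matches the 3 to 4 second delay I get.)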

    How can this be fixed?

    How can I make playback start at the live point on load on Android mobile?

  • FFMPEG command from Python 3.5 does not actually create audio file

    20 December 2017, by Nathan Blaine

    I have a Django web application that accepts user-uploaded videos/audio and saves them into a folder '../WebAppDirectory/media/recordings'.

    I am then using a speech to text API to get a rough transcription of the audio. This is working fine for .wav and .mp4 files, but the web app also accepts videos (.MOV) that I would like to first convert to .wav, then pass off to the API.

    I use ffmpeg from my command line like this:

    ffmpeg -i C:\Users\Nathan\Desktop\MeetingRecorderWebAPP\media\recordings\upload_sample.MOV -ab 160k -ac 2 -ar 44100 -vn upload_sample.wav

    This correctly creates the .wav file from the original .MOV.

    However, when I run this from Python with

    subprocess.check_call(command, shell=True)

    ffmpeg responds with

    File 'upload_sample.wav' already exists. Overwrite? [y/N]

    While Python tells me

    FileNotFoundError: [Errno 2] No such file or directory: 'C:\Users\Nathan\Desktop\MeetingRecorderWebAPP\media\recordings\upload_sample.wav'

    It is also worth noting that I do not see an 'upload_sample.wav' file in the media/recordings/ directory.

    This leads me to believe that maybe Python and ffmpeg are looking in different folders, but I am not sure where I am going wrong. When I print the command from the subprocess.check_call and copy/paste it into cmd, the file is created as expected.
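    One quick way to test the "different folders" hunch (a throwaway sketch, not part of the app; it assumes it is saved next to uploaded_audio_to_text.py and run the same way):

    # Throwaway check: the ffmpeg output path above ("upload_sample.wav") is relative,
    # so ffmpeg writes it into the process's current working directory, which is not
    # necessarily this script's media/recordings folder.
    import os
    from os import path

    script_dir = path.dirname(path.realpath(__file__))
    recordings = path.join(script_dir, 'media', 'recordings')

    print("cwd of the process:   ", os.getcwd())
    print("folder the app checks:", recordings)
    print("wav in cwd?           ", path.exists(path.join(os.getcwd(), 'upload_sample.wav')))
    print("wav in recordings?    ", path.exists(path.join(recordings, 'upload_sample.wav')))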

    Hoping someone with some experience with ffmpeg/Python subprocess can help shed some light! Here are the files I am working with:

    Folder Structure

    DjangoWebApp
    |---media
    |---|---imgs
    |---|---recordings
    |---|---|---upload_sample.MOV
    |---uploaded_audio_to_text.py

    uploaded_audio_to_text.py

    import speech_recognition as sr
    from os import path
    import os
    import subprocess


    def speech_to_text(file_name):
       AUDIO_FILE = path.join(path.dirname(path.realpath(__file__)), 'media','recordings', file_name)
       print("Looking at path: ",AUDIO_FILE)
       # get extension
       AUDIO_FILE_EXT = os.path.splitext(AUDIO_FILE)[1]

       if(AUDIO_FILE_EXT == '.MOV'):
           print("File is not .wav: ", AUDIO_FILE_EXT, "found. Converting...")
           # We will use subprocess and ffmpeg to convert this .MOV file to .wav, so we can send to API
           temp_wav = os.path.splitext(file_name)[0] + '.wav'
           print("New audio file will be: ", temp_wav)
           # build CMD ffmpeg command
           command = "ffmpeg -i "
           command += AUDIO_FILE
           command += " -ab 160k -ac 2 -ar 44100 -vn "
           command += temp_wav

           print("Attempting to run this command: \n",command)
           print(subprocess.check_call(command, shell=True))
           print("Past Subprocess.call")
           AUDIO_FILE = path.join(path.dirname(path.realpath(__file__)), 'media','recordings', temp_wav)
           print("AUDIO_FILE now set to: ", AUDIO_FILE)

       else:
           # continue with what we are doing
           pass


       r = sr.Recognizer()
       with sr.AudioFile(AUDIO_FILE) as source:
           audio = r.record(source)  # read the entire audio file
           text_transcription = "Sentinel"
           # recognize speech using Microsoft Bing Voice Recognition
           BING_KEY = "MY_KEY_:)"
           try:
               text_transcription = r.recognize_bing(audio, key=BING_KEY)
           except sr.UnknownValueError:
               print("Microsoft Bing Voice Recognition could not understand audio")
           except sr.RequestError as e:
               print("Could not request results from Microsoft Bing Voice Recognition service; {0}".format(e))

       return text_transcription


    #my tests
    my_relative_file_path = "upload_sample.MOV"
    print(speech_to_text(my_relative_file_path))

    Console output (traceback and my print()'s)

    Looking at path:  C:\Users\Nathan\Desktop\MeetingRecorderWebAPP\media\recordings\upload_sample.MOV
    File is not .wav:  .MOV found. Converting...
    New audio file will be:  upload_sample.wav Attempting to run this command:
    ffmpeg -i C:\Users\Nathan\Desktop\MeetingRecorderWebAPP\media\recordings\upload_sample.MOV -ab 160k -ac 2 -ar 44100 -vn upload_sample.wav
    ffmpeg version git-2017-12-18-74f408c Copyright (c) 2000-2017 the FFmpeg developers   built with gcc 7.2.0 (GCC)  
    ----REMOVED SOME FFMPEG OUTPUT FOR BREVITY----
    File 'upload_sample.wav' already exists. Overwrite? [y/N] y
    Stream mapping:   Stream #0:1 -> #0:0 (aac (native) -> pcm_s16le (native)) Press [q] to stop, [?] for help Output #0, wav, to 'upload_sample.wav':   Metadata:
       major_brand     : qt  
       minor_version   : 0
       compatible_brands: qt  
       com.apple.quicktime.creationdate: 2017-12-19T16:06:10-0500
       com.apple.quicktime.make: Apple
       com.apple.quicktime.model: iPhone 6
       com.apple.quicktime.software: 10.3.3
       ISFT            : Lavf58.3.100
       Stream #0:0(und): Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, stereo, s16, 1411 kb/s (default)
       Metadata:
         creation_time   : 2017-12-19T21:06:11.000000Z
         handler_name    : Core Media Data Handler
         encoder         : Lavc58.8.100 pcm_s16le size=    1036kB time=00:00:06.01 bitrate=1411.3kbits/s speed=N/A     video:0kB audio:1036kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.007352%
    0
    Traceback (most recent call last): Past Subprocess.call  
    File "C:\Users\Nathan\Desktop\MeetingRecorderWebAPP\uploaded_audio_to_text.py", line 53, in <module>
    AUDIO_FILE now set to:  C:\Users\Nathan\Desktop\MeetingRecorderWebAPP\media\recordings\upload_sample.wav
       print(speech_to_text(my_relative_file_path))  
    File "C:\Users\Nathan\Desktop\MeetingRecorderWebAPP\uploaded_audio_to_text.py", line 36, in speech_to_text
       with sr.AudioFile(AUDIO_FILE) as source:  
    File "C:\Users\Nathan\AppData\Local\Programs\Python\Python36-32\lib\site-packages\speech_recognition\__init__.py", line 203, in __enter__
       self.audio_reader = wave.open(self.filename_or_fileobject, "rb")  
    File "C:\Users\Nathan\AppData\Local\Programs\Python\Python36-32\lib\wave.py", line 499, in open
       return Wave_read(f)  
    File "C:\Users\Nathan\AppData\Local\Programs\Python\Python36-32\lib\wave.py", line 159, in __init__
       f = builtins.open(f, 'rb')
    FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\Nathan\\Desktop\\MeetingRecorderWebAPP\\media\\recordings\\upload_sample.wav'

    Process finished with exit code 1
  • Can I use avcodec_free_context() on an opened context?

    23 March 2017, by Ashe the human

    The latest documentation says here that reopening a context that has been closed is not supported any more. I see why: some codecs don't work properly when they're reopened. So after finding this bug, I decided not to use avcodec_close() and to call avcodec_free_context() on the contexts right away instead.

    But I'm not sure whether it's safe to do so with 2.8.4, the version I linked against in my program. The documentation from that time doesn't clarify. Does anyone know? At least empirically?

    ffmpeg version 2.8.4 Copyright (c) 2000-2015 the FFmpeg developers
    built with Microsoft (R) C/C++ 최적화 컴파일러 버전 18.00.31101(x64)
    configuration: --toolchain=msvc --enable-gpl --enable-nonfree --enable-nvenc --enable-libvorbis --enable-libmp3lame --enable-libtheora --enable-libx264 --enable-libx265 --enable-libxvid --enable-libopus --enable-libvpx --enable-static --disable-shared --disable-debug --extra-cflags=-MT --extra-cxxflags=-MT --extra-ldflags='/nodefaultlib:msvcrt.lib' --extra-libs='zlib.lib libogg_static.lib libvorbis_static.lib libmpghip-static.lib libmp3lame-static.lib libtheora_static.lib libx264.lib x265-static.lib libxvidcore.lib silk_fixed.lib silk_common.lib silk_float.lib celt.lib opus.lib vpxmt.lib'
    libavutil      54. 31.100 / 54. 31.100
    libavcodec     56. 60.100 / 56. 60.100
    libavformat    56. 40.101 / 56. 40.101
    libavdevice    56.  4.100 / 56.  4.100
    libavfilter     5. 40.101 /  5. 40.101
    libswscale      3.  1.101 /  3.  1.101
    libswresample   1.  2.101 /  1.  2.101
    libpostproc    53.  3.100 / 53.  3.100

    I know there are a bunch of forums I could post on, but I felt like asking here first.