Advanced search

Media (0)

No media matching your criteria is available on this site.

Other articles (50)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MediaSPIP is at version 0.2 or higher. If necessary, contact the administrator of your MédiaSpip to find out.

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images (png, gif, jpg, bmp and more); audio (MP3, Ogg, Wav and more); video (AVI, MP4, OGV, mpg, mov, wmv and more); text, code and other data (OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth) and (...)

  • Requesting the creation of a channel

    12 March 2010

    Depending on the configuration of the platform, the user may have two different ways of requesting the creation of a channel. The first is at the time of registration; the second, after registration, by filling in a request form.
    Both methods ask for the same information and work in roughly the same way: the future user has to fill in a series of form fields that first of all give the administrators information about (...)

On other sites (8194)

  • French CNIL recommends Piwik: the only analytics tool that does not require Cookie Consent

    29 October 2014, by Matthieu Aubry — Press Releases

    There have been recent and important changes in France regarding data privacy and the use of cookies. This blog post will introduce you to these changes and explain how to make your website compliant.

    Cookie Consent in the data freedom law

    Since the adoption of the EU Directive 2009/136/EC ("Telecom Package"), Internet users must be informed and must give their prior consent before cookies are stored on their computers. The use of cookies for advertising, analytics and social share buttons requires the user's consent:

    It is necessary to inform users of the presence, purpose and duration of the cookies placed in their browsers, and of the means at their disposal to oppose them.

    What is a cookie?

    Cookies are tracers placed on Internet users' hard drives by the servers of the websites they visit. They allow the website to identify a single user across multiple visits with a unique identifier. Cookies may be used for various purposes: building up a shopping cart, storing a website's language settings, or targeting advertising by monitoring the user's web browsing.

    Which cookies are exempt from the Cookie Consent rule?

    France has exempted certain cookies from the Cookie Consent rule: for cookies that are strictly necessary to provide the service requested by the user, you do not need to ask the user for consent. Examples of such cookies are:

    • the shopping cart cookie,
    • authentication cookies,
    • short lived session cookies,
    • load balancer cookies,
    • certain first party analytics (such as Piwik cookies),
    • persistent cookies for interface personalisation.

    Asking users for consent for Analytics (tracking) Cookies

    For all cookies that are not exempt from the Cookie Consent rule, you will need to:

    • obtain consent from web users before placing or reading cookies and similar technologies,
    • clearly inform web users of the different purposes for which the cookies and similar technologies will be used,
    • propose a real choice to web users between accepting or refusing cookies and similar technologies.

    You don’t need Cookie Consent with Piwik

    The excellent news is that there is a way to bypass the Cookie Consent banner on your website:

    If you are using an analytics solution other than Piwik, you will need to ask users for consent. If you do not want to ask for consent, download and install Piwik or sign up to Piwik Cloud to get started.

    If you are already using Piwik, you need to do two simple things: (1) anonymise visitor IP addresses (mask at least two bytes) and (2) include the opt-out iframe solution in your website (learn more), as sketched below.
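
    As a rough illustration, assuming a Piwik instance at piwik.example.org (a placeholder host) and placeholder dimensions, the opt-out iframe can be embedded along these lines:

    <!-- hypothetical example: replace piwik.example.org with the URL of your own Piwik -->
    <iframe style="border: 0; height: 200px; width: 600px;"
            src="https://piwik.example.org/index.php?module=CoreAdminHome&amp;action=optOut&amp;language=en">
    </iframe>

    IP anonymisation itself is switched on in Piwik's privacy settings, which mask one or more bytes of each visitor's IP address before it is stored.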

    Note that these recommendations currently only apply in France, but because the law is European we can expect similar findings in other European countries.

    CNIL recommends Piwik

    We are proud that the CNIL has identified Piwik as the only tool that respects all privacy requirements set by the European Telecom law.

    About the CNIL

    The CNIL is an independent administrative body that operates in accordance with the French data protection legislation. The CNIL has been entrusted with the general duty to inform people of the rights that the data protection legislation grants them.

    The role and responsibilities of the CNIL are:

    • to protect citizens and their data
    • to regulate and control processing of personal data
    • to inspect the security of data processing systems and applications, and impose penalties

    Piwik and Privacy

    At Piwik we love Privacy – our open analytics platform comes with built-in Privacy.

    Future of Privacy at Piwik

    Piwik is already the leader when it comes to respecting user privacy but we plan to continue improving privacy within the open analytics platform. For more information and specific ideas see Privacy enhancing issues in our issue tracker.

    References

    Learn more in these articles in French [fr] or English:

    Contact

    To learn more about Piwik, please visit piwik.org,

    Get in touch with the Piwik team : Contact information,

    For professional support contact Piwik PRO.

  • ffmpeg concatenation with -filter_complex

    16 October 2018, by Igniter

    I've seen several similar questions, but none of them actually helped in my case.
    I'm getting this error while trying to join one audio file and four video files of different natures and resolutions.

    ffmpeg -i 0.mp3 -i 1.mp4 -i 2.mkv -i 3.mkv -i 4.webm \
       -filter_complex [0:a:0][1:v:0][2:v:0][3:v:0][4:v:0]concat=n=5:v=1:a=1[outv][outa] \
       -map "[outv]" -map "[outa]" output.mp4

    All this gives the following error:

    Stream specifier ':a:0' in filtergraph description [0:a:0][1:v:0][2:v:0][3:v:0][4:v:0]concat=n=5:v=1:a=1[outv][outa] matches no streams.

    Straight concatenation -i "concat:0.mp3|1.mp4..." also doesn't work as expected because of the different resolutions and video formats. The syntax for all of these methods was taken from the official documentation, so there must be something I've missed here.
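
    For reference, a minimal sketch of one approach, assuming the MP3 is meant to be a single soundtrack rather than per-segment audio (concat with a=1 expects one audio stream from every segment): scale the videos to a common resolution, concatenate video only, and map the audio separately. File names and the 1920x1080 target are assumptions.

    ffmpeg -i 0.mp3 -i 1.mp4 -i 2.mkv -i 3.mkv -i 4.webm \
       -filter_complex "[1:v:0]scale=1920:1080,setsar=1[v1];[2:v:0]scale=1920:1080,setsar=1[v2];[3:v:0]scale=1920:1080,setsar=1[v3];[4:v:0]scale=1920:1080,setsar=1[v4];[v1][v2][v3][v4]concat=n=4:v=1:a=0[outv]" \
       -map "[outv]" -map 0:a:0 -shortest output.mp4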

    Full output log:

    ffmpeg version 3.4.4-0ubuntu0.18.04.1 Copyright (c) 2000-2018 the FFmpeg developers
     built with gcc 7 (Ubuntu 7.3.0-16ubuntu3)
     configuration: --prefix=/usr --extra-version=0ubuntu0.18.04.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-librsvg --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared
     libavutil      55. 78.100 / 55. 78.100
     libavcodec     57.107.100 / 57.107.100
     libavformat    57. 83.100 / 57. 83.100
     libavdevice    57. 10.100 / 57. 10.100
     libavfilter     6.107.100 /  6.107.100
     libavresample   3.  7.  0 /  3.  7.  0
     libswscale      4.  8.100 /  4.  8.100
     libswresample   2.  9.100 /  2.  9.100
     libpostproc    54.  7.100 / 54.  7.100
    Input #0, mp3, from 'mp3/10.mp3':
     Metadata:
       album_artist    : artist
       title           : title
       artist          : 10
       album           : 12
       track           : 1
       VideoKind       : 2
       date            : 2009
     Duration: 00:06:00.44, start: 0.025056, bitrate: 64 kb/s
       Stream #0:0: Audio: mp3, 44100 Hz, stereo, s16p, 64 kb/s
       Metadata:
         encoder         : LAME3.98r
       Stream #0:1: Video: mjpeg, yuvj420p(pc, bt470bg/unknown/unknown), 200x200 [SAR 72:72 DAR 1:1], 90k tbr, 90k tbn, 90k tbc
       Metadata:
         comment         : Cover (front)
    Input #1, matroska,webm, from '1.mp4':
     Metadata:
       MINOR_VERSION   : 0
       COMPATIBLE_BRANDS: iso6avc1mp41
       MAJOR_BRAND     : dash
       ENCODER         : Lavf57.83.100
     Duration: 00:01:53.05, start: 0.007000, bitrate: 2292 kb/s
       Stream #1:0: Video: h264 (High), yuv420p(tv, bt709, progressive), 1920x1080 [SAR 1:1 DAR 16:9], 24 fps, 24 tbr, 1k tbn, 48 tbc (default)
       Metadata:
         HANDLER_NAME    : VideoHandler
         DURATION        : 00:01:53.048000000
    Input #2, matroska,webm, from '2.mkv':
     Metadata:
       MINOR_VERSION   : 0
       COMPATIBLE_BRANDS: iso6avc1mp41
       MAJOR_BRAND     : dash
       ENCODER         : Lavf57.83.100
     Duration: 00:02:08.09, start: 0.007000, bitrate: 1607 kb/s
       Stream #2:0: Video: h264 (High), yuv420p(tv, bt709, progressive), 1920x1080 [SAR 1:1 DAR 16:9], 24 fps, 24 tbr, 1k tbn, 48 tbc (default)
       Metadata:
         HANDLER_NAME    : VideoHandler
         DURATION        : 00:02:08.090000000
    Input #3, matroska,webm, from '3.mkv':
     Metadata:
       MINOR_VERSION   : 0
       COMPATIBLE_BRANDS: iso6avc1mp41
       MAJOR_BRAND     : dash
       ENCODER         : Lavf57.83.100
     Duration: 00:01:37.05, start: 0.007000, bitrate: 3525 kb/s
       Stream #3:0: Video: h264 (High), yuv420p(tv, bt709, progressive), 1920x1080 [SAR 1:1 DAR 16:9], 24 fps, 24 tbr, 1k tbn, 48 tbc (default)
       Metadata:
         HANDLER_NAME    : VideoHandler
         DURATION        : 00:01:37.048000000
    Input #4, matroska,webm, from '4.webm':
     Metadata:
       MINOR_VERSION   : 0
       COMPATIBLE_BRANDS: iso6avc1mp41
       MAJOR_BRAND     : dash
       ENCODER         : Lavf57.83.100
     Duration: 00:01:45.13, start: 0.007000, bitrate: 3685 kb/s
       Stream #4:0: Video: h264 (High), yuv420p(tv, bt709, progressive), 1920x1080 [SAR 1:1 DAR 16:9], 24 fps, 24 tbr, 1k tbn, 48 tbc (default)
       Metadata:
         HANDLER_NAME    : VideoHandler
         DURATION        : 00:01:45.131000000
    Stream specifier ':a:0' in filtergraph description [0:a:0][1:v:0][2:v:0][3:v:0][4:v:0]concat=n=5:v=1:a=1[outv][outa] matches no streams.
  • Problems with Python's azure.cognitiveservices.speech when installing together with FFmpeg in a Linux web app

    15 May 2024, by Kakobo kakobo

    I need some help.
I'm building a web app that takes audio in any format, converts it into a .wav file and then passes it to 'azure.cognitiveservices.speech' for transcription. I'm building the web app via a container Dockerfile, as I need to install ffmpeg to be able to convert non-".wav" audio files to ".wav" (Azure Speech Services only processes wav files). For some odd reason, the 'speechsdk' class of 'azure.cognitiveservices.speech' fails to work when I install ffmpeg in the web app. The class works perfectly fine when I install it without ffmpeg, or when I build and run the container on my machine.
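
    The conversion step itself is roughly of this form (a sketch only; input.mp3 and output.wav are placeholders, and 16 kHz mono 16-bit PCM is a format the Speech SDK handles well):

    # hypothetical conversion command with placeholder file names
    ffmpeg -i input.mp3 -ar 16000 -ac 1 -c:a pcm_s16le output.wav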

    I have placed debug print statements in the code. I can see the class initiating, but for some reason it does not buffer in the same way as when running it locally on my machine. The routine simply stops without any apparent reason.

    Has anybody experienced a similar issue with azure.cognitiveservices.speech conflicting with ffmpeg?

    Here's my Dockerfile:

# Use an official Python runtime as a parent image
FROM python:3.11-slim

# Version Run
RUN echo "Version Run 1..."

# Install ffmpeg, make sure the binary is executable, then clean the apt cache
# (removing /var/lib/apt/lists saves space)
RUN apt-get update && apt-get install -y ffmpeg \
    && chmod a+rx /usr/bin/ffmpeg \
    && apt-get clean && rm -rf /var/lib/apt/lists/*

# Set the working directory in the container
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install any needed packages specified in requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

# Make port 8000 available to the world outside this container
EXPOSE 8000

# Define environment variable
ENV NAME World

# Run main.py when the container launches
CMD ["streamlit", "run", "main.py", "--server.port", "8000", "--server.address", "0.0.0.0"]

    And here's my Python code:

# Note: azure_speech_key, azure_speech_region, encoding, generate_random_string
# and set_language_to_speech_code are assumed to be defined elsewhere in the module.
import os
import time
import azure.cognitiveservices.speech as speechsdk

def transcribe_audio_continuous_old(temp_dir, audio_file, language):
    speech_key = azure_speech_key
    service_region = azure_speech_region

    time.sleep(5)
    print(f"DEBUG TIME BEFORE speechconfig")

    ran = generate_random_string(length=5)
    temp_file = f"transcript_key_{ran}.txt"
    output_text_file = os.path.join(temp_dir, temp_file)
    speech_recognition_language = set_language_to_speech_code(language)
    
    speech_config = speechsdk.SpeechConfig(subscription=speech_key, region=service_region)
    speech_config.speech_recognition_language = speech_recognition_language
    audio_input = speechsdk.AudioConfig(filename=os.path.join(temp_dir, audio_file))
        
    speech_recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_input, language=speech_recognition_language)
    done = False
    transcript_contents = ""

    time.sleep(5)
    print(f"DEBUG TIME AFTER speechconfig")
    print(f"DEBUG FIle about to be passed {audio_file}")

    try:
        with open(output_text_file, "w", encoding=encoding) as file:
            def recognized_callback(evt):
                print("Start continuous recognition callback.")
                print(f"Recognized: {evt.result.text}")
                file.write(evt.result.text + "\n")
                nonlocal transcript_contents
                transcript_contents += evt.result.text + "\n"

            def stop_cb(evt):
                print("Stopping continuous recognition callback.")
                print(f"Event type: {evt}")
                speech_recognizer.stop_continuous_recognition()
                nonlocal done
                done = True
            
            def canceled_cb(evt):
                print(f"Recognition canceled: {evt.reason}")
                if evt.reason == speechsdk.CancellationReason.Error:
                    print(f"Cancellation error: {evt.error_details}")
                nonlocal done
                done = True

            speech_recognizer.recognized.connect(recognized_callback)
            speech_recognizer.session_stopped.connect(stop_cb)
            speech_recognizer.canceled.connect(canceled_cb)

            speech_recognizer.start_continuous_recognition()
            while not done:
                time.sleep(1)
                print("DEBUG LOOPING TRANSCRIPT")

    except Exception as e:
        print(f"An error occurred: {e}")

    print("DEBUG DONE TRANSCRIPT")

    return temp_file, transcript_contents

    The transcript callback works fine locally, or when installed without ffmpeg in the Linux web app. I'm not sure why it conflicts with ffmpeg when installed via the container Dockerfile. The code section that fails can be found at the note #NOTE DEBUG.