Advanced search

Media (0)

Word: - Tags -/diogene

No media matching your criteria is available on the site.

Other articles (52)

  • MediaSPIP v0.2

    21 June 2013

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, and it is announced here.
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make further changes (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    To get a working installation, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make further changes (...)

  • Authorizations overridden by plugins

    27 April 2010

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors are able to modify their own information on the authors page

On other sites (7985)

  • How to install ffmpeg on Google App Engine?

    3 May 2020, by UtkuBulkan

    I intend to install ffmpeg and ffprobe in my Google App Engine flex environment. How can I do this as a package via requirements.txt?

    I am currently using ffmpeg-python, yet this is only a wrapper.

    ffmpeg-python==0.2.0
    ffprobe-python==1.0.3


    Below is the error I get when I try to use the ffmpeg-python wrapper, which is expected, as ffmpeg and ffprobe are not available:

    2020-05-03 11:42:36 default[20200503t112932]  [03/May/2020 11:42:36] ERROR    [log.py:228] Internal Server Error: /capture_thumbnail/
    2020-05-03 11:42:36 default[20200503t112932]  Traceback (most recent call last):
      File "/env/lib/python3.6/site-packages/django/core/handlers/exception.py", line 34, in inner
        response = get_response(request)
      File "/env/lib/python3.6/site-packages/django/core/handlers/base.py", line 115, in _get_response
        response = self.process_exception_by_middleware(e, request)
      File "/env/lib/python3.6/site-packages/django/core/handlers/base.py", line 113, in _get_response
        response = wrapped_callback(request, *callback_args, **callback_kwargs)
      File "/home/vmagent/app/core/views.py", line 62, in capture_thumbnail
        generate_thumbnail(request, blob_uuid)
      File "/home/vmagent/app/core/videointelligence1.py", line 264, in generate_thumbnail
        probe = ffmpeg.probe(video_url)
      File "/env/lib/python3.6/site-packages/ffmpeg/_probe.py", line 20, in probe
        p = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
      File "/opt/python3.6/lib/python3.6/subprocess.py", line 729, in __init__
        restore_signals, start_new_session)
      File "/opt/python3.6/lib/python3.6/subprocess.py", line 1364, in _execute_child
        raise child_exception_type(errno_num, err_msg, err_filename)
    FileNotFoundError: [Errno 2] No such file or directory: 'ffprobe': 'ffprobe'

    Can you please suggest a way to use ffprobe on Google App Engine?

  • AppRTC: Google’s WebRTC test app and its parameters

    23 July 2014, by silvia

    If you’ve been interested in WebRTC and haven’t lived under a rock, you will know about Google’s open source testing application for WebRTC: AppRTC.

    When you go to the site, a new video conferencing room is automatically created for you and you can share the provided URL with somebody else and thus connect (make sure you’re using Google Chrome, Opera or Mozilla Firefox).

    We’ve been using this application forever to check whether any issues with our own WebRTC applications are due to network connectivity issues, firewall issues, or browser bugs, in which case AppRTC breaks down, too. Otherwise we can be pretty sure we have to dig deeper into our own code.

    Now, AppRTC creates a pretty poor quality video conference, because the browsers use a 640×480 resolution by default. However, there are many query parameters that can be added to the AppRTC URL through which the connection can be manipulated.

    Here are my favourite parameters:

    • hd=true: turns on high definition, i.e. minWidth=1280, minHeight=720
    • stereo=true: turns on stereo audio
    • debug=loopback: connect to yourself (great to check your own firewalls)
    • tt=60: by default, the channel is closed after 30 min; this gives you 60 minutes (max 1440)

    For example, here’s what a stereo, HD loopback test looks like: https://apprtc.appspot.com/?r=82313387&hd=true&stereo=true&debug=loopback .
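
    Here is a tiny snippet (illustrative only, not part of AppRTC itself) that assembles the same URL from a parameter map:

    // Illustrative only: build an AppRTC URL from the query parameters above.
    const params: Record<string, string> = {
      r: "82313387",     // room id
      hd: "true",        // high definition
      stereo: "true",    // stereo audio
      debug: "loopback", // connect to yourself
    };

    const query = Object.entries(params)
      .map(([key, value]) => `${key}=${encodeURIComponent(value)}`)
      .join("&");

    console.log(`https://apprtc.appspot.com/?${query}`);
    // -> https://apprtc.appspot.com/?r=82313387&hd=true&stereo=true&debug=loopback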

    These are not the only available parameters, though. Here are some others that you may find interesting for some more in-depth geekery:

    • ss=[stunserver]: in case you want to test a different STUN server to the default Google ones
    • ts=[turnserver]: in case you want to test a different TURN server to the default Google ones
    • tp=[password]: password for the TURN server
    • audio=true&video=false: audio-only call
    • audio=false: video-only call
    • audio=googEchoCancellation=false,googAutoGainControl=true: disable echo cancellation and enable auto gain control
    • audio=googNoiseReduction=true: enable noise reduction (more Google-specific parameters)
    • asc=ISAC/16000: preferred audio send codec is ISAC at 16 kHz (use on Android)
    • arc=opus/48000: preferred audio receive codec is Opus at 48 kHz
    • dtls=false: disable datagram transport layer security (DTLS)
    • dscp=true: enable DSCP
    • ipv6=true: enable IPv6

    AppRTC’s source code is available here. And here is the file with the parameters (in case you want to check if they have changed).

    Have fun playing with the main and always up-to-date WebRTC application : AppRTC.

    UPDATE 12 May 2014

    AppRTC now also supports the following bitrate controls:

    • arbr=[bitrate]: set audio receive bitrate
    • asbr=[bitrate]: set audio send bitrate
    • vsbr=[bitrate]: set video send bitrate
    • vrbr=[bitrate]: set video receive bitrate

    Example usage: https://apprtc.appspot.com/?r=&asbr=128&vsbr=4096&hd=true


  • How to send encoded video (or audio) data from server to client in a way that's decodable by the WebCodecs API, with minimal latency and data overhead

    11 January 2023, by Tiger Yang

    My question (read the entire post for context):

    Given the unique circumstance of only ever decoding data from a specifically configured encoder, what is the best way to send the encoded bitstream along with the bare minimum of extra bytes required to properly configure the decoder on the client's end (including only things that change per stream, and omitting things that don't, such as resolution)? I'm a sucker for zero compromises, and I think I am willing to design my own minimal container format to accomplish this.

    Context and problem:

    I'm working on a remote desktop implementation that consists of a server which captures and encodes the display and speakers using FFmpeg and forwards the output via a pipe to a Go program, which sends it over two unidirectional WebTransport streams to my client, where I plan to decode it using the WebCodecs API. According to MDN, the video decoder needs to be fed, via .configure(), an object with the fields described at https://developer.mozilla.org/en-US/docs/Web/API/VideoDecoder/configure before it's able to decode anything.

    The same goes for the audio decoder: https://developer.mozilla.org/en-US/docs/Web/API/AudioDecoder/configure

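    For reference, here is a minimal sketch (an illustration, not from the post) of what such a configure() call could look like for a 1080p HEVC stream; the codec string, dimensions, and the Annex B note are assumptions that have to match the actual encoder output:

    // Sketch only: configure a WebCodecs VideoDecoder for an HEVC stream.
    const decoder = new VideoDecoder({
      output: (frame: VideoFrame) => {
        // render the decoded frame (e.g. draw it to a canvas), then release it
        frame.close();
      },
      error: (e: DOMException) => console.error(e),
    });

    decoder.configure({
      codec: "hev1.1.6.L120.B0", // assumed: HEVC Main profile, level 4.0
      codedWidth: 1920,
      codedHeight: 1080,
      // "description" (the hvcC decoder configuration record) is omitted here;
      // without it, the H.264/HEVC WebCodecs registrations expect Annex B input.
    });
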
    What I've tried so far:

    Because this remote desktop will be for my personal use only, it will only ever receive streams from a specific encoder configured in a specific way, encoding video at a specific resolution, framerate, color space, etc. Therefore, I took my video capture FFmpeg command...

    videoString := []string{
        "ffmpeg",
        "-init_hw_device", "d3d11va", // create a Direct3D 11 hardware device
        "-filter_complex", "ddagrab=video_size=1920x1080:framerate=60", // desktop duplication capture, 1080p at 60 fps
        "-vcodec", "hevc_nvenc", // NVIDIA hardware HEVC encoder
        "-tune", "ll", // low-latency tuning
        "-preset", "p7", // slowest, highest-quality NVENC preset
        "-spatial_aq", "1", // spatial adaptive quantization
        "-temporal_aq", "1", // temporal adaptive quantization
        "-forced-idr", "1", // forced keyframes are emitted as IDR frames
        "-rc", "cbr", // constant-bitrate rate control
        "-b:v", "500K", // 500 kbit/s video bitrate
        "-no-scenecut", "1", // no extra keyframes on scene changes
        "-g", "216000", // GOP length: one keyframe per hour at 60 fps
        "-f", "hevc", "-", // raw HEVC (Annex B) elementary stream to stdout
    }


    ...and instructed it to write to an mp4 file instead of outputting to the pipe, and then I had this WebCodecs demo https://w3c.github.io/webcodecs/samples/video-decode-display/ demux it using mp4box.js. Knowing that the demo outputs a proper .configure() object, I blindly copied it and had my client configure itself with that every time. Sadly, it didn't work, and I have since noticed that the "description" part of the configure object changes despite the encoder and parameters being the same.

    I knew that mp4 files worked via mp4box, but they can't be streamed with low latency over a network. Additionally, ffmpeg's -f parameter specifies the muxer to use, but there are so many different types.

    At this point, I think I'm completely out of my depth, so:

    Given the unique circumstance of only ever decoding data from a specifically configured encoder, what is the best way to send the encoded bitstream along with the bare minimum of extra bytes required to properly configure the decoder on the client's end (including only things that change per stream, and omitting things that don't, such as resolution)? I'm a sucker for zero compromises, and I think I am willing to design my own minimal container format to accomplish this. (copied above)
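
    For illustration, one such minimal container could be as simple as a fixed header in front of every encoded access unit. The layout below is entirely hypothetical: a 1-byte keyframe flag, an 8-byte big-endian timestamp in microseconds, and a 4-byte big-endian payload length, with everything that never changes (codec, resolution, framerate) sent once per stream in a separate configure message. A client-side sketch:

    // Hypothetical framing: [1B key flag][8B timestamp, µs][4B payload length][payload]
    async function feedDecoder(
      stream: ReadableStream<Uint8Array>,
      decoder: VideoDecoder,
    ): Promise<void> {
      const reader = stream.getReader();
      let buf = new Uint8Array(0);

      for (;;) {
        const { value, done } = await reader.read();
        if (done) break;

        // append the newly received bytes to the pending buffer
        const merged = new Uint8Array(buf.length + value.length);
        merged.set(buf);
        merged.set(value, buf.length);
        buf = merged;

        // drain every complete frame currently in the buffer
        while (buf.length >= 13) {
          const view = new DataView(buf.buffer, buf.byteOffset, buf.length);
          const len = view.getUint32(9);
          if (buf.length < 13 + len) break;

          decoder.decode(new EncodedVideoChunk({
            type: buf[0] === 1 ? "key" : "delta",
            timestamp: Number(view.getBigUint64(1)), // microseconds
            data: buf.subarray(13, 13 + len),
          }));
          buf = buf.subarray(13 + len);
        }
      }
    }

    The server side would write the same 13-byte header in front of each encoded packet it reads from the FFmpeg pipe; with -f hevc the payloads are already Annex B data, which pairs with a configure() call that omits "description", as sketched earlier.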