
Media (1)

Keyword: - Tags - / Christian Nold

Other articles (33)

  • The farm's regular Cron tasks

    1 December 2010

    Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of every instance of the shared-hosting farm on a regular basis. Combined with a system Cron on the central site of the farm, this generates regular visits to the various sites and prevents the tasks of rarely visited sites from being too (...) (A minimal sketch of this idea appears at the end of this article list.)

  • List of compatible distributions

    26 April 2011

    The table below lists the Linux distributions compatible with MediaSPIP's automated installation script.

    Distribution name    Version name             Version number
    Debian               Squeeze                  6.x.x
    Debian               Wheezy                   7.x.x
    Debian               Jessie                   8.x.x
    Ubuntu               The Precise Pangolin     12.04 LTS
    Ubuntu               The Trusty Tahr          14.04
    If you want to help us improve this list, you can provide us with access to a machine whose distribution is not mentioned above, or send the necessary fixes to add (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The player used was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows media playback on major mobile platforms with the above (...)
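    Referring back to the super Cron described in the first article above: its job boils down to hitting each instance's Cron entry point on a schedule so that rarely visited sites still run their tasks. Below is a minimal Python sketch of that idea; the instance list and the spip.php?action=cron entry point are illustrative assumptions, not details taken from the article.

# Minimal sketch of a "super Cron": visit every instance's Cron URL so that
# rarely visited sites still get their scheduled tasks run.
# The instance list and the ?action=cron entry point are illustrative
# assumptions, not taken from the article.
import urllib.request

INSTANCES = [
    "https://site1.example.org",
    "https://site2.example.org",
]

def trigger_all_crons(timeout=10):
    for base in INSTANCES:
        url = base.rstrip("/") + "/spip.php?action=cron"
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                print(base, resp.status)
        except OSError as exc:
            # One failing instance should not block the others.
            print(base, "error:", exc)

if __name__ == "__main__":
    trigger_all_crons()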

On other sites (6912)

  • I tried to play audio in an Alexa skill from my S3 bucket; in the test tab it shows, but I can't hear any sound

    19 April 2022, by Siti Mayna

    So I tried to play audio in an Alexa skill from my S3 bucket. In the test tab it shows, but in fact I can't hear any sound. Another fact is that I tried the sample audio from https://developer.amazon.com/en-US/docs/alexa/custom-skills/ask-soundlibrary.html and it worked, so why doesn't it work when the audio comes from my own S3 bucket?
    Notes:

    • I've tried testing the skill using my mobile phone as well.
    • I've tried encoding the audio using FFmpeg.
    • I've tried using Jovo to convert the audio: https://v3.jovo.tech/audio-converter
    • I don't know how to fix this error.
    • There is no error message in CloudWatch.

    Assumptions: there is some problem related to the audio resources, or there is more setup needed to play audio from an S3 bucket, since the sample audio works.

    Steps to reproduce:

    1. Build the interaction model.

    2. Encode the audio to make it Alexa-skill friendly (fulfilling the requirements, such as sample rate). I used and tried all of these:

    A:

    ffmpeg -i  -ac 2 -codec:a libmp3lame -b:a 48k -ar 16000 -write_xing 0 

    B:

    ffmpeg -i  -ac 2 -codec:a libmp3lame -b:a 48k -ar 24000 -write_xing 0 

    C:

    ffmpeg -y -i input.mp3 -ar 16000 -ab 48k -codec:a libmp3lame -ac 1 output.mp3
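    As a quick check of this step (not part of the original question), the encoded file can be probed to confirm it matches the commonly documented Alexa MP3 constraints (MP3 codec, 48 kbps, and a 16000/22050/24000 Hz sample rate). A minimal sketch with ffprobe, assuming it is on the PATH; the exact limits should be verified against the Alexa documentation:

# Sketch: probe the encoded file and compare it against the commonly
# documented Alexa MP3 constraints (codec, sample rate, bit rate).
# These limits are assumptions to be verified against the Alexa docs.
import json
import subprocess

def probe(path):
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-show_streams", "-of", "json", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)["streams"][0]

stream = probe("output.mp3")
codec = stream.get("codec_name")
rate = int(stream.get("sample_rate", 0))
bitrate = int(stream.get("bit_rate", 0))
print(codec, rate, bitrate)
if codec == "mp3" and rate in (16000, 22050, 24000) and bitrate <= 48000:
    print("Looks Alexa-friendly")
else:
    print("Re-check the encoding parameters")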


    


    


    3. Upload the audio resources to the S3 bucket.
    The audio samples are in S3 storage, but none of them produce any sound.

    4. Use the link and insert it into APLA.json:
    
    {
      "type": "APLA",
      "version": "0.91",
      "description": "Simple document that generates speech",
      "mainTemplate": {
        "parameters": [
          "payload"
        ],
        "type": "Sequencer",
        "items": [
          {
            "type": "Audio",
            "source": "https://72578561-d9d8-47b4-811c-cafbcbc5ddb9-us-east-1.s3.amazonaws.com/Media/one-small-step-alexa-24.mp3"
          }
        ]
      }
    }



    


    Note: I change the link source depending on which audio file I am testing.
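    Since the working theory is that the problem lies with the audio resource itself, one simple check (not part of the original question) is whether the S3 object is publicly reachable over HTTPS and served with an audio content type. A minimal sketch using the URL from the APLA document above:

# Sketch: verify that the S3 object referenced in APLA.json is publicly
# reachable over HTTPS and served with an audio content type.
# An HTTP 403 here would point to bucket/object permissions rather than
# the skill code.
import urllib.error
import urllib.request

URL = ("https://72578561-d9d8-47b4-811c-cafbcbc5ddb9-us-east-1"
       ".s3.amazonaws.com/Media/one-small-step-alexa-24.mp3")

request = urllib.request.Request(URL, method="HEAD")
try:
    with urllib.request.urlopen(request, timeout=10) as resp:
        print(resp.status, resp.headers.get("Content-Type"))
except urllib.error.HTTPError as exc:
    print("HTTP error:", exc.code)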

    


    


    5. The intent in lambda_function.py:

    


    


import json
import logging
from typing import Any, Dict

import ask_sdk_core.utils as ask_utils
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_model.interfaces.alexa.presentation.apla import RenderDocumentDirective
from ask_sdk_model.response import Response

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)


def _load_apl_document(file_path):
    # type: (str) -> Dict[str, Any]
    """Load the APLA JSON document at the path into a dict object."""
    with open(file_path) as f:
        return json.load(f)


class LaunchRequestHandler(AbstractRequestHandler):
    """Handler for Skill Launch."""

    def can_handle(self, handler_input):
        # type: (HandlerInput) -> bool
        return ask_utils.is_request_type("LaunchRequest")(handler_input)

    def handle(self, handler_input):
        # type: (HandlerInput) -> Response
        logger.info("In LaunchRequestHandler")

        speak_output = "Hello World!"
        # .ask("add a reprompt if you want to keep the session open for the user to respond")

        return (
            handler_input.response_builder
                # .speak(speak_output)
                .add_directive(
                    RenderDocumentDirective(
                        token="pagerToken",
                        document=_load_apl_document("APLA.json"),
                        datasources={}
                    )
                )
                .response
        )


    


    


    6. Deploy.

    7. Test it.

    


    


    


    The result of the test on my end:
    [Screenshot: the response from testing]

    


    


    The JSON response:

    


    {
    "body": {
        "version": "1.0",
        "response": {
            "directives": [
                {
                    "type": "Alexa.Presentation.APLA.RenderDocument",
                    "token": "pagerToken",
                    "document": {
                        "type": "APLA",
                        "version": "0.91",
                        "description": "Simple document that generates speech",
                        "mainTemplate": {
                            "parameters": [
                                "payload"
                            ],
                            "type": "Sequencer",
                            "items": [
                                {
                                    "type": "Audio",
                                    "source": "https://72578561-d9d8-47b4-811c-cafbcbc5ddb9-us-east-1.s3.amazonaws.com/Media/one-small-step-alexa-24.mp3"
                                }
                            ]
                        }
                    },
                    "datasources": {}
                }
            ],
            "type": "_DEFAULT_RESPONSE"
        },
        "sessionAttributes": {},
        "userAgent": "ask-python/1.16.1 Python/3.7.12"
    }
}


    


    


    On my CloudWatch:
    [Screenshot: CloudWatch logs]

    


    


  • How to correctly show video with transparency in Qt with OpenCV + FFmpeg

    11 April 2022, by TheEnigmist

    I'm trying to show a video with transparency in a Qt6 application using OpenCV + FFMPEG. These are the tool versions:

    


      

    • Win 11
    • Qt 6.3.0
    • OpenCV 4.5.5 (built with CMake)
    • FFMPEG 2022-04-03-git-1291568c98-full_build-www.gyan.dev


    


    I've used a base .mov video with transparency as a test (link provided below).
    First of all, I converted the .mov video to a .webm (VP9) video, and the conversion output shows that the alpha channel remains:

    


    


    ffmpeg -i '.\Retro Bars.mov' -c:v libvpx-vp9 -crf 30 -b:v 0 output.webm

    


    


    Input #0, mov,mp4,m4a,3gp,3g2,mj2,
    ...
    Stream #0:0[0x1](eng): Video: qtrle (rle  / 0x20656C72), argb(progressive),
    ...

Output #0, webm, 
   ...
   Stream #0:0(eng): Video: vp9, yuva420p(tv, progressive),
   ...


    


    But when I show the info of the output file with ffmpeg, it seems to lose the alpha channel:

    


    


    ffmpeg -i .\output.webm

    


    


    Input #0, matroska,webm,
    ...
    Stream #0:0(eng): Video: vp9 (Profile 0), yuv420p(tv, progressive),
    ...


    


    If I open output.webm with OBS, it is shown correctly without a background, as shown in the picture:
    [Screenshot: obs_load]

    


    If I try to open it with OpenCV + FFMPEG, it shows a black background under the bars, as shown in the picture:
    [Screenshot: Qt_out]

    


    This is how I load the video in Qt:

    


    cv::VideoCapture capture;
capture.open(filename, cv::CAP_FFMPEG);
capture.set(cv::CAP_PROP_CONVERT_RGB, false); // try forcing load alpha channel
... //in a thread
while (capture.read(frame)) {
    qDebug() << "c" << frame.channels() << "t" <<  frame.type() << "d" <<  frame.depth(); // output: c 3 t 16 d 0
    cv::cvtColor(frame, frame, cv::COLOR_BGR2RGBA); //useless since no alpha channel is detected
    img = QImage(frame.data, frame.cols, frame.rows, QImage::Format_RGBA8888);
    emit processedImage(img); // to show image in a QLabel with QPixmap::fromImage(img)
}


    


    I think the problem is that when I load the video with OpenCV it doesn't detect the alpha channel, since I can load it correctly in other players (OBS, HTML5, etc.).

    


    What am I doing wrong in this process of showing this video in Qt with transparency?

    


    EDIT: Added a Dropbox link with the test video + ffmpeg outputs:
    sample items
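    One way to check the reasoning above (not part of the original question): even though ffmpeg -i output.webm reports yuv420p, WebM stores VP9 alpha as side data, and the container typically keeps an alpha_mode stream tag that ffprobe can show. A minimal Python sketch of that check, assuming ffprobe is on the PATH:

# Sketch: check whether output.webm still carries alpha information.
# FFmpeg's native VP9 decoder reports yuv420p and does not expose the alpha
# plane, but a WebM file encoded with alpha typically has an "alpha_mode"
# stream tag. Assumes ffprobe is available on PATH.
import json
import subprocess

def describe_stream(path):
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-show_streams", "-of", "json", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)["streams"][0]

stream = describe_stream("output.webm")
print("pix_fmt reported:", stream.get("pix_fmt"))
print("alpha_mode tag:", stream.get("tags", {}).get("alpha_mode"))

    If the alpha_mode tag is present, the alpha data is still in the file, which would suggest the black background comes from the decoding path rather than from the conversion step.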

    


  • avdevice/dshow: list_devices: show media type(s) per device

    21 December 2021, by Diederick Niehorster
    avdevice/dshow: list_devices: show media type(s) per device
    

    the list_devices option of dshow didn't indicate whether a specific
    device provides audio or video output. This patch iterates through all
    media formats of all pins exposed by the device to see what types it
    provides for capture, and prints this to the console for each device.
    Importantly, this now makes it possible to find devices that provide both audio and
    video, and devices that provide neither.

    Signed-off-by: Diederick Niehorster <dcnieho@gmail.com>
    Reviewed-by: Roger Pack <rogerdpack2@gmail.com>

    • [DH] libavdevice/dshow.c