Advanced search

Media (91)

Other articles (40)

  • Sites built with MediaSPIP

    2 May 2011, by

    This page presents some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • Use, discuss, criticize

    13 April 2011, by

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

  • Contribute to a better visual interface

    13 April 2011

    MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
    Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.

On other sites (6582)

  • Show video length in HLS player before all TS files are created

    14 November 2022, by WilliamTaco

    I have a Spring Boot backend which, on request (on demand), uses ffmpeg to create an m3u8 playlist with its TS files from an mp4 file. Basically, my React frontend requests index.m3u8 from the backend; if it doesn't already exist, the backend creates it and then starts serving it along with its TS files. This causes the frontend HLS player to show the video length as the combined length of the chunks generated so far, which grows over time until the whole video is available. That makes sense, but I was wondering what the correct way is to show the full length in the player even though the video isn't fully created yet?

    I'm using react-hls-player to play the stream, and Spring Boot plus a Java ffmpeg wrapper to transcode the video.

    I might be thinking about this the wrong way, so feel free to correct me if I'm on the wrong path!
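
    For context, here is a minimal sketch of the kind of ffmpeg invocation the setup above implies; the file names are placeholders, not the poster's actual paths. With -hls_playlist_type event, the muxer keeps every segment in the playlist and only appends #EXT-X-ENDLIST once transcoding finishes, so players derive the duration from the segments published so far, which matches the growing length described:

    ffmpeg -i input.mp4 -c copy -hls_time 10 -hls_list_size 0 -hls_playlist_type event index.m3u8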

  • Show progress bar of an ffmpeg video conversion

    13 June 2022, by stackexchange.com-upvm25mz

    I'm trying to print a progress bar while executing ffmpeg, but I'm having trouble getting the total number of frames and the current frame being processed. The error I get is AttributeError: 'NoneType' object has no attribute 'groups'. I copied the function from this program, and for some reason it works there but not here, even though I haven't changed that part.

    main.py:

import os
import re

import pexpect as expect  # assumed import, matching the expect.spawn / expect.EOF usage below
import ffmpeg             # assumed: ffmpeg-python, which provides ffmpeg.Error

# BAR_FMT, COUNTER_FMT and the 'manager' argument (an enlighten.Manager)
# are defined elsewhere in the program this snippet was copied from.

pattern_duration = re.compile(
    r'duration[ \t\r]?:[ \t\r]?(.+?),[ \t\r]?start', re.IGNORECASE)
pattern_progress = re.compile(r'time=(.+?)[ \t\r]?bitrate', re.IGNORECASE)

def execute_ffmpeg(self, manager, cmd):
    proc = expect.spawn(cmd, encoding='utf-8')
    self.update_progress_bar(proc, manager)
    self.raise_ffmpeg_error(proc)

def update_progress_bar(self, proc, manager):
    # proc.match is still None here, since no expect() has matched yet;
    # this call is the most likely source of the AttributeError in the question.
    total = self.get_total_frames(proc)
    cont = 0
    pbar = self.initialize_progress_bar(manager)
    try:
        proc.expect(pattern_duration)
        while True:
            progress = self.get_current_frame(proc)
            percent = progress / total * 100
            pbar.update(percent - cont)
            cont = percent
    except expect.EOF:
        pass
    finally:
        if pbar is not None:
            pbar.close()

def raise_ffmpeg_error(self, proc):
    proc.expect(expect.EOF)
    res = proc.before
    res += proc.read()
    exitstatus = proc.wait()
    if exitstatus:
        raise ffmpeg.Error('ffmpeg', '', res)

def initialize_progress_bar(self, manager):
    pbar = manager.counter(
        total=100,
        desc=self.path.rsplit(os.path.sep, 1)[-1],
        unit='%',
        bar_format=BAR_FMT,
        counter_format=COUNTER_FMT,
        leave=False
    )
    return pbar

def get_total_frames(self, proc):
    # Parses the HH:MM:SS duration captured by the last expect() into seconds.
    return sum(map(lambda x: float(x[1]) * 60 ** x[0],
                   enumerate(reversed(proc.match.groups()[0].strip().split(':')))))

def get_current_frame(self, proc):
    # Same HH:MM:SS parsing, applied to ffmpeg's 'time=' progress lines.
    proc.expect(pattern_progress)
    return sum(map(lambda x: float(x[1]) * 60 ** x[0],
                   enumerate(reversed(proc.match.groups()[0].strip().split(':')))))

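    A plausible reading of the error, offered as a sketch rather than a confirmed fix: update_progress_bar() reads proc.match (inside get_total_frames()) before any expect() call has matched, so it is still None. Reordering so the duration line is matched first would be the minimal change, assuming the rest of the snippet stays as above:

def update_progress_bar(self, proc, manager):
    cont = 0
    pbar = self.initialize_progress_bar(manager)
    try:
        # Match the duration line first, so proc.match is populated
        # before get_total_frames() reads it.
        proc.expect(pattern_duration)
        total = self.get_total_frames(proc)
        while True:
            progress = self.get_current_frame(proc)
            percent = progress / total * 100
            pbar.update(percent - cont)
            cont = percent
    except expect.EOF:
        pass
    finally:
        if pbar is not None:
            pbar.close()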

  • I tried to play audio in an Alexa skill from my S3 bucket; the test tab shows it, but in fact I can't hear any sound

    19 April 2022, by Siti Mayna

    So I tried to play audio in my Alexa skill from my S3 bucket. In the test tab it shows a response, but in fact I can't hear any sound. Another fact: when I tried the sample audio from https://developer.amazon.com/en-US/docs/alexa/custom-skills/ask-soundlibrary.html it worked, so why won't it work when the audio comes from my own S3 bucket?

    Notes:

    • I've tried testing the skill on my mobile phone as well.
    • I've tried encoding the audio using FFmpeg.
    • I've tried using Jovo to convert the audio: https://v3.jovo.tech/audio-converter
    • I don't know how to fix this error.
    • There is no error message in CloudWatch.

    Assumption: there is some problem with the audio resources, or some additional setup is needed to play audio from an S3 bucket, since the sample audio works.

    Steps to reproduce:

    1. Build the interaction model.

    2. Encode the audio to make it Alexa-skill friendly (fulfilling the requirements: sample rate, etc.). I used and tried all of the following (an ffprobe check is sketched after these steps):

    A:

    ffmpeg -i  -ac 2 -codec:a libmp3lame -b:a 48k -ar 16000 -write_xing 0 

    B:

    ffmpeg -i  -ac 2 -codec:a libmp3lame -b:a 48k -ar 24000 -write_xing 0 

    C:

    ffmpeg -y -i input.mp3 -ar 16000 -ab 48k -codec:a libmp3lame -ac 1 output.mp3

    3. Upload the audio resources to the S3 bucket. The audio samples are in S3 storage, but none of them produce any sound (a note on bucket permissions follows these steps).

    4. Use the link and insert it into APLA.json:

    {
      "type": "APLA",
      "version": "0.91",
      "description": "Simple document that generates speech",
      "mainTemplate": {
        "parameters": [
          "payload"
        ],
        "type": "Sequencer",
        "items": [
          {
            "type": "Audio",
            "source": "https://72578561-d9d8-47b4-811c-cafbcbc5ddb9-us-east-1.s3.amazonaws.com/Media/one-small-step-alexa-24.mp3"
          }
        ]
      }
    }

    Note: I changed the link source depending on which audio I was testing.

    The handler in lambda_function.py:

# Imports assumed from the standard ASK SDK Python skill template;
# they are not shown in the original excerpt.
import json
import logging
from typing import Any, Dict

import ask_sdk_core.utils as ask_utils
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_model.response import Response
from ask_sdk_model.interfaces.alexa.presentation.apla import RenderDocumentDirective

logger = logging.getLogger(__name__)

def _load_apl_document(file_path):
    # type: (str) -> Dict[str, Any]
    """Load the APLA JSON document at the path into a dict object."""
    with open(file_path) as f:
        return json.load(f)

class LaunchRequestHandler(AbstractRequestHandler):
    """Handler for Skill Launch."""
    def can_handle(self, handler_input):
        # type: (HandlerInput) -> bool
        return ask_utils.is_request_type("LaunchRequest")(handler_input)

    def handle(self, handler_input):
        # type: (HandlerInput) -> Response
        logger.info("In LaunchRequestHandler")

        speak_output = "Hello World!"
        # .ask("add a reprompt if you want to keep the session open for the user to respond")

        # Note: .speak() is commented out, so the only audio comes from the
        # APLA RenderDocument directive below.
        return (
            handler_input.response_builder
                # .speak(speak_output)
                .add_directive(
                    RenderDocumentDirective(
                        token="pagerToken",
                        document=_load_apl_document("APLA.json"),
                        datasources={}
                    )
                )
                .response
        )

    5. Deploy.

    6. Test it.
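
    As mentioned in step 2, one way to confirm an encode actually meets the target parameters (my suggestion; the poster does not mention verifying the output) is to inspect it with ffprobe, shown here with command C's output name:

    ffprobe -hide_banner output.mp3

    This prints the duration and the audio stream line (codec, sample rate, channels, bit rate), which can be checked against Alexa's requirements.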

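    As mentioned in step 3, one culprit worth ruling out (an assumption on my part, though it matches the poster's own hypothesis about the audio resources) is that the S3 objects are not publicly readable, since Alexa fetches the audio over plain HTTPS. A minimal sketch of a bucket policy granting public read, with a placeholder bucket name, would be:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "PublicReadForAlexaAudio",
                "Effect": "Allow",
                "Principal": "*",
                "Action": "s3:GetObject",
                "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME/Media/*"
            }
        ]
    }

    A quick sanity check is to open the object URL in a browser; an AccessDenied XML response would mean Alexa cannot fetch it either.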

    The result of the test on my end (screenshot: "The response for testing"):

    The JSON response:

{
    "body": {
        "version": "1.0",
        "response": {
            "directives": [
                {
                    "type": "Alexa.Presentation.APLA.RenderDocument",
                    "token": "pagerToken",
                    "document": {
                        "type": "APLA",
                        "version": "0.91",
                        "description": "Simple document that generates speech",
                        "mainTemplate": {
                            "parameters": [
                                "payload"
                            ],
                            "type": "Sequencer",
                            "items": [
                                {
                                    "type": "Audio",
                                    "source": "https://72578561-d9d8-47b4-811c-cafbcbc5ddb9-us-east-1.s3.amazonaws.com/Media/one-small-step-alexa-24.mp3"
                                }
                            ]
                        }
                    },
                    "datasources": {}
                }
            ],
            "type": "_DEFAULT_RESPONSE"
        },
        "sessionAttributes": {},
        "userAgent": "ask-python/1.16.1 Python/3.7.12"
    }
}

    On my CloudWatch (screenshot: "Cloud Watch"):