Media (3)

Other articles (70)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The player used by MediaSPIP was created specifically for it and can easily be adapted to fit a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used.
    The HTML5 player used was created specifically for MediaSPIP: it is fully customizable graphically to match a chosen theme.
    These technologies make it possible to deliver video and audio both to conventional computers (...)

On other sites (10131)

  • Senior Software Engineer for Enterprise Analytics Platform

    28 January 2016, by Matthieu Aubry — Uncategorized

    We’re looking for a lead developer to work on the core Piwik Enterprise Analytics platform. We have some exciting challenges to solve and need you!

    You’ll be working with both fellow employees and our open-source community. Piwik staff live in New Zealand, Europe (Poland, Germany), and the U.S. We do the vast majority of our collaboration online.

    We are a small, flexible team, so when you come aboard, you will play an integral part in engineering. As a leader, you'll help us prioritise work and grow our community. You'll help create a welcoming environment for new contributors and set an example with your development practices and communication skills. You will be working closely with our CTO to build a future for Piwik.

    Key Responsibilities

    • Write high-quality PHP and JavaScript code.
    • Scale the existing backend system to handle ever-increasing traffic and new product requirements.
    • Communicate and collaborate effectively with colleagues and the community.
    • Drive development and documentation of internal and external APIs (Piwik is an open platform).
    • Improve our development practices and reduce friction from idea to deployment.
    • Mentor junior engineers and set the stage for their personal growth.

    Minimum qualifications

    • 5+ years of experience in product development, security, and usable interface design.
    • 5+ years of experience building successful production software systems.
    • Strong competency in PHP5 and JavaScript application development.
    • Skill at writing tests and reviewing code.
    • Strong analytical skills.

    Location

    • Remote work position!
    • Or join us in one of our offices in Wellington, New Zealand, or Wrocław, Poland.

    Benefits

    • Competitive salary.
    • Remote work is possible.
    • Yearly meetup with the whole team abroad.
    • Be part of a successful open source company and community.
    • In our Wellington (NZ) and Wrocław (PL) offices: snacks, coffee, a nap room, table football, ping pong…
    • Regular events.
    • Great team of people.
    • Exciting projects.

    Learn more

    Learn more about what it’s like to work on Piwik in our blog post.

    About Piwik

    At Piwik we develop the leading open source web analytics platform, used by more than one million websites worldwide. Our vision is to help the world liberate its analytics data by building the best open alternative to Google Analytics.

    The Piwik platform collects, stores and processes a lot of information: hundreds of millions of data points each month. We create intuitive, simple and beautiful reports that delight our users.

    Apply online

    To apply for this position, please apply online here. We look forward to receiving your application!

  • Merge 3 ffmpeg audio tracks into one file without normalized audio

    29 July 2022, by Meh.

    Essentially, I'm trying to get the command-line version of dropping files into Audacity, where they all start at the same time and export as one file.

    I'm using the command ffmpeg -i <file> -i <file> -i <file> -filter_complex amix=inputs=3 <out> from another question, and it makes the audio/end product softer. I can definitely tell a difference without doing any checking. I am working with files from the musdb18 database so I can make instrumentals manually for a project.

    How can I combine multiple audio files without ffmpeg doing something weird like changing the dB level or normalizing the audio?

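    For context on why this happens: by default, ffmpeg's amix filter scales each input down (roughly 1/n for n inputs) so the mix cannot clip, which makes the output quieter. A hedged sketch of two common workarounds, with placeholder file names; note the normalize option only exists in newer FFmpeg builds:

    ffmpeg -i a.wav -i b.wav -i c.wav -filter_complex "amix=inputs=3:normalize=0" out.wav
    ffmpeg -i a.wav -i b.wav -i c.wav -filter_complex "amix=inputs=3,volume=3" out.wav

    The first variant disables amix's input scaling entirely; the second keeps the default scaling and multiplies the mix back up by 3. Either can clip if the summed sources exceed full scale, so check the output levels.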

  • Python aiortc: How to record audio and video coming from the client in the same file? [closed]

    22 December 2024, by Chris P

    I have an aiortc app in which an HTML5 client sends data (microphone, camera) to the server.

    On the server side I successfully played these two streams separately.

    But when I try to record using the aiortc MediaRecorder helper class, only the audio is recorded and the video is dropped (MP4 format).

    I think this is due to a sync issue.

    The audio_frame and the video_frame in each pair have different time_base values (I don't know if this is strange), and they also have different times.
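
    (As a hedged aside: differing time_base values per track are normal in WebRTC; 48 kHz is the usual audio clock and 90 kHz the usual video clock. What matters is the product pts * time_base, the timestamp in seconds. The sample numbers below are made up for illustration:)

    import fractions

    # one 20 ms audio frame at the 48 kHz clock
    audio_seconds = 960 * fractions.Fraction(1, 48000)    # -> 0.02 s
    # one video frame interval at the 90 kHz clock (~30 fps)
    video_seconds = 3000 * fractions.Fraction(1, 90000)   # -> 1/30 s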

    I can share code, but it wouldn't help at all right now.

    Edit: I also tried to synchronize them client-side, with no luck:

    // Synchronize tracks
    async function synchronizeTracks(localStream) {
        const videoTrack = localStream.getVideoTracks()[0];
        const audioTrack = localStream.getAudioTracks()[0];

        const syncedStream = new MediaStream();

        // Add tracks to the synchronized stream
        syncedStream.addTrack(videoTrack);
        syncedStream.addTrack(audioTrack);

        // Video and audio processors
        const videoProcessor = new MediaStreamTrackProcessor({ track: videoTrack });
        const audioProcessor = new MediaStreamTrackProcessor({ track: audioTrack });

        const videoReader = videoProcessor.readable.getReader();
        const audioReader = audioProcessor.readable.getReader();

        const videoWriter = new MediaStreamTrackGenerator({ kind: "video" }).writable.getWriter();
        const audioWriter = new MediaStreamTrackGenerator({ kind: "audio" }).writable.getWriter();

        const syncThreshold = 5; // Maximum allowable time difference in milliseconds
        let baseTimestamp = null;

        async function processTracks() {
            try {
                while (true) {
                    const [videoResult, audioResult] = await Promise.all([
                        videoReader.read(),
                        audioReader.read(),
                    ]);

                    if (videoResult.done || audioResult.done) break;

                    const videoFrame = videoResult.value;
                    const audioFrame = audioResult.value;

                    // Initialize base timestamp if needed
                    if (baseTimestamp === null) {
                        baseTimestamp = Math.min(videoFrame.timestamp, audioFrame.timestamp);
                    }

                    const videoRelativeTimestamp = videoFrame.timestamp - baseTimestamp;
                    const audioRelativeTimestamp = audioFrame.timestamp - baseTimestamp;

                    const timeDifference = videoRelativeTimestamp - audioRelativeTimestamp;

                    if (Math.abs(timeDifference) <= syncThreshold) {
                        // Frames are in sync
                        await videoWriter.write(videoFrame);
                        await audioWriter.write(audioFrame);
                    } else if (timeDifference > 0) {
                        // Video is ahead, wait for audio to catch up
                        await audioWriter.write(audioFrame);
                        // Reuse video frame on the next loop
                        videoReader.releaseLock();
                    } else {
                        // Audio is ahead, wait for video to catch up
                        await videoWriter.write(videoFrame);
                        // Reuse audio frame on the next loop
                        audioReader.releaseLock();
                    }

                    // Release frames
                    videoFrame.close();
                    audioFrame.close();
                }
            } catch (error) {
                console.error("Error in track synchronization:", error);
            } finally {
                videoReader.releaseLock();
                audioReader.releaseLock();
                videoWriter.close();
                audioWriter.close();
            }
        }

        processTracks();

        return syncedStream;
    }

    Python code to improve:

    import asyncio
    import fractions

    class SyncClientTracksForRecording:
        def __init__(self, audio_track, video_track, audio_track_sync_q, video_track_sync_q):
            self.audio_track = audio_track
            self.video_track = video_track
            self.audio_track_sync_q = audio_track_sync_q
            self.video_track_sync_q = video_track_sync_q

            # Time bases
            self.audio_time_base = fractions.Fraction(1, 48000)  # 48 kHz audio
            self.video_time_base = fractions.Fraction(1, 90000)  # 90 kHz video

            # Elapsed time tracking
            self.audio_elapsed_time = 0.0
            self.video_elapsed_time = 0.0

            # Stop signal for synchronization loop
            self.stop_signal = False

        async def sync(self):
            while not self.stop_signal:
                try:
                    # Receive audio and video frames concurrently
                    audio_task = asyncio.create_task(self.audio_track.recv())
                    video_task = asyncio.create_task(self.video_track.recv())

                    audio_frame, video_frame = await asyncio.gather(audio_task, video_task)

                    # Set time bases
                    audio_frame.time_base = self.audio_time_base
                    video_frame.time_base = self.video_time_base

                    # Calculate and assign PTS values
                    audio_frame.pts = int(self.audio_elapsed_time / float(self.audio_time_base))
                    video_frame.pts = int(self.video_elapsed_time / float(self.video_time_base))

                    # Increment elapsed time
                    self.audio_elapsed_time += 0.020  # Assuming 20 ms audio frame duration
                    self.video_elapsed_time += 1 / 30  # Assuming 30 fps video frame rate

                    # Enqueue frames
                    await asyncio.gather(
                        self.audio_track_sync_q.put(audio_frame),
                        self.video_track_sync_q.put(video_frame),
                    )

                except Exception as e:
                    print(f"Error in sync loop: {e}")
                    break

        def stop(self):
            """Stop the synchronization loop."""
            self.stop_signal = True

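    For comparison, a minimal sketch of the server-side recording path, assuming the stock aiortc MediaRecorder; the output path and wiring here are illustrative, not taken from the question. One common cause of audio-only MP4s is starting the recorder before the video track has been registered:

    from aiortc import RTCPeerConnection
    from aiortc.contrib.media import MediaRecorder

    pc = RTCPeerConnection()
    recorder = MediaRecorder("recording.mp4")  # hypothetical output path

    @pc.on("track")
    def on_track(track):
        # Register each incoming track (audio and video) with the same
        # recorder; MediaRecorder muxes every added track into one container.
        recorder.addTrack(track)

    # Once negotiation is complete and both tracks have been added:
    #     await recorder.start()
    # And when the session ends:
    #     await recorder.stop()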