Advanced search

Media (2)

Keyword: - Tags -/media

Other articles (99)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

  • Websites built with MediaSPIP

    2 May 2011

    This page presents some of the websites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (13897)

  • Permissions issue with Python and ffmpeg on a Mac

    13 April 2020, by EventHorizon

    I am fairly new to Python (4 weeks), and I have been struggling with this all day.

    I am using macOS 10.13, Python 3.7 via Anaconda Navigator 1.9.12, and Spyder 4.0.1.

    Somehow (only a noob, remember) I had 2 Anaconda environments. I don't do production code, just research, so I figured I would make life simple and just use the base environment. I deleted the other environment.

    I had previously got FFmpeg working and was able to do frame grabs, build MPEG animations, and convert them to GIFs for blog posts and such. I had FFmpeg installed in the directories associated with the deleted environment, so it went away.

    No worries: I got the git URL and used Terminal to install it in /opt/anaconda3/bin. It's all there, and I can run FFmpeg from the Terminal.

    My problem: when I attempt to run a module that previously worked fine, I get the following message:

    [Errno 13] Permission denied: '/opt/anaconda3/bin/ffmpeg'

    In my module I set the default location of FFmpeg: plt.rcParams['animation.ffmpeg_path'] = '/opt/anaconda3/bin/ffmpeg'
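
    As an aside, rather than hard-coding the location, the ffmpeg path can be resolved and sanity-checked at runtime before it is handed to matplotlib. A minimal sketch (illustrative only; it assumes ffmpeg is either on the PATH or at the hard-coded location above):

import os
import shutil
import matplotlib.pyplot as plt

# Prefer whatever ffmpeg is found on the PATH; fall back to the hard-coded location.
ffmpeg_path = shutil.which("ffmpeg") or "/opt/anaconda3/bin/ffmpeg"

# subprocess will later need a regular file with the execute bit set.
if not (os.path.isfile(ffmpeg_path) and os.access(ffmpeg_path, os.X_OK)):
    raise RuntimeError(f"{ffmpeg_path} is not an executable file")

plt.rcParams['animation.ffmpeg_path'] = ffmpeg_path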

    In my module I have the following lines:

    writer = animation.FFMpegWriter(fps=frameRate, metadata=metadata)
writer.setup(fig, "animation.mp4", 100)

    This calls matplotlib's 'animation.py', which runs the following:

def setup(self, fig, outfile, dpi=None):
    '''
    Perform setup for writing the movie file.

    Parameters
    ----------
    fig : `~matplotlib.figure.Figure`
        The figure object that contains the information for frames
    outfile : str
        The filename of the resulting movie file
    dpi : int, optional
        The DPI (or resolution) for the file.  This controls the size
        in pixels of the resulting movie file. Default is fig.dpi.
    '''
    self.outfile = outfile
    self.fig = fig
    if dpi is None:
        dpi = self.fig.dpi
    self.dpi = dpi
    self._w, self._h = self._adjust_frame_size()

    # Run here so that grab_frame() can write the data to a pipe. This
    # eliminates the need for temp files.
    self._run()

def _run(self):
    # Uses subprocess to call the program for assembling frames into a
    # movie file.  *args* returns the sequence of command line arguments
    # from a few configuration options.
    command = self._args()
    _log.info('MovieWriter.run: running command: %s', command)
    PIPE = subprocess.PIPE
    self._proc = subprocess.Popen(
        command, stdin=PIPE, stdout=PIPE, stderr=PIPE,
        creationflags=subprocess_creation_flags)

    Everything works fine up to the last line (i.e. 'command' looks like a well-formatted FFmpeg command line and PIPE returns -1), but subprocess.Popen() bombs out with the error message above.
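
    A quick way to see whether the failure is specific to the module is a stand-alone script that drives the same code path. A minimal sketch (the figure contents and the output file name are just placeholders); saving() calls setup(), which calls _run() and therefore subprocess.Popen():

import matplotlib
matplotlib.use("Agg")  # no GUI backend needed for this test
import matplotlib.pyplot as plt
import matplotlib.animation as animation

plt.rcParams['animation.ffmpeg_path'] = '/opt/anaconda3/bin/ffmpeg'

fig, ax = plt.subplots()
line, = ax.plot([], [])

writer = animation.FFMpegWriter(fps=15)
# saving() invokes setup()/_run(); if Popen() cannot execute ffmpeg,
# the same [Errno 13] shows up here, independent of the rest of the module.
with writer.saving(fig, "popen_test.mp4", dpi=100):
    for i in range(5):
        line.set_data(range(i), [v * v for v in range(i)])
        writer.grab_frame()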

    I have tried changing file permissions - taking a sledgehammer approach and setting everything in /opt/anaconda3/bin/ffmpeg to 777 (read, write, and execute). But that doesn't seem to make any difference. I really am clueless when it comes to Apple's OS, file permissions, etc. Any suggestions?
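
    One thing worth ruling out (a diagnostic sketch, assuming FFmpeg was fetched from git as described above): if /opt/anaconda3/bin/ffmpeg is actually a directory, for example a cloned source tree rather than the compiled binary, executing it typically fails with this same Errno 13 no matter what the permission bits say:

import os
import stat
import subprocess

path = '/opt/anaconda3/bin/ffmpeg'

print("exists:       ", os.path.exists(path))
print("is directory: ", os.path.isdir(path))   # a git checkout would be a directory
print("is file:      ", os.path.isfile(path))
print("executable:   ", os.access(path, os.X_OK))
print("mode:         ", oct(stat.S_IMODE(os.stat(path).st_mode)))

# If this really is the compiled binary, the version banner should print.
print(subprocess.run([path, "-version"], capture_output=True, text=True).stdout)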

  • Convert a WebRTC track stream to a URL (RTSP/UDP/RTP/HTTP) in a video tag

    19 July 2020, by Zeeshan Younis

    I am new to WebRTC. I have set up a client/server connection: on the client I choose a webcam and post the stream to the server using a track, and on the server side I receive that track and assign its stream to a video source. Everything works fine so far, but now I am adding AI (artificial intelligence) and I want to convert my track stream to a URL - maybe UDP/RTSP/RTP etc. - so the AI can use that URL for object detection. I don't know how to convert the track stream to a URL.
    There are a couple of packages, such as https://ffmpeg.org/ and RTP-to-WebRTC bridges; I am using Node.js, Socket.io and WebRTC. Below you can check my client and server side code for getting and posting the stream; I am following this GitHub code: https://github.com/Basscord/webrtc-video-broadcast.
    My main concern is to expose the track as a URL for a video tag. Is this possible or not? Please suggest; any help would be appreciated.

    Server.js

    This is the Node.js server code:

    const express = require("express");
const app = express();

let broadcaster;
const port = 4000;

const http = require("http");
const server = http.createServer(app);

const io = require("socket.io")(server);
app.use(express.static(__dirname + "/public"));

io.sockets.on("error", e => console.log(e));
io.sockets.on("connection", socket => {
  socket.on("broadcaster", () => {
    broadcaster = socket.id;
    socket.broadcast.emit("broadcaster");
  });
  socket.on("watcher", () => {
    socket.to(broadcaster).emit("watcher", socket.id);
  });
  socket.on("offer", (id, message) => {
    socket.to(id).emit("offer", socket.id, message);
  });
  socket.on("answer", (id, message) => {
    socket.to(id).emit("answer", socket.id, message);
  });
  socket.on("candidate", (id, message) => {
    socket.to(id).emit("candidate", socket.id, message);
  });
  socket.on("disconnect", () => {
    socket.to(broadcaster).emit("disconnectPeer", socket.id);
  });
});
server.listen(port, () => console.log(`Server is running on port ${port}`));

    Broadcast.js

    This is the client code that emits the stream (track):

    const peerConnections = {};
const config = {
  iceServers: [
    {
      urls: ["stun:stun.l.google.com:19302"]
    }
  ]
};

const socket = io.connect(window.location.origin);

socket.on("answer", (id, description) => {
  peerConnections[id].setRemoteDescription(description);
});

socket.on("watcher", id => {
  const peerConnection = new RTCPeerConnection(config);
  peerConnections[id] = peerConnection;

  let stream = videoElement.srcObject;
  stream.getTracks().forEach(track => peerConnection.addTrack(track, stream));

  peerConnection.onicecandidate = event => {
    if (event.candidate) {
      socket.emit("candidate", id, event.candidate);
    }
  };

  peerConnection
    .createOffer()
    .then(sdp => peerConnection.setLocalDescription(sdp))
    .then(() => {
      socket.emit("offer", id, peerConnection.localDescription);
    });
});

socket.on("candidate", (id, candidate) => {
  peerConnections[id].addIceCandidate(new RTCIceCandidate(candidate));
});

socket.on("disconnectPeer", id => {
  peerConnections[id].close();
  delete peerConnections[id];
});

window.onunload = window.onbeforeunload = () => {
  socket.close();
};

// Get camera and microphone
const videoElement = document.querySelector("video");
const audioSelect = document.querySelector("select#audioSource");
const videoSelect = document.querySelector("select#videoSource");

audioSelect.onchange = getStream;
videoSelect.onchange = getStream;

getStream()
  .then(getDevices)
  .then(gotDevices);

function getDevices() {
  return navigator.mediaDevices.enumerateDevices();
}

function gotDevices(deviceInfos) {
  window.deviceInfos = deviceInfos;
  for (const deviceInfo of deviceInfos) {
    const option = document.createElement("option");
    option.value = deviceInfo.deviceId;
    if (deviceInfo.kind === "audioinput") {
      option.text = deviceInfo.label || `Microphone ${audioSelect.length + 1}`;
      audioSelect.appendChild(option);
    } else if (deviceInfo.kind === "videoinput") {
      option.text = deviceInfo.label || `Camera ${videoSelect.length + 1}`;
      videoSelect.appendChild(option);
    }
  }
}

function getStream() {
  if (window.stream) {
    window.stream.getTracks().forEach(track => {
      track.stop();
    });
  }
  const audioSource = audioSelect.value;
  const videoSource = videoSelect.value;
  const constraints = {
    audio: { deviceId: audioSource ? { exact: audioSource } : undefined },
    video: { deviceId: videoSource ? { exact: videoSource } : undefined }
  };
  return navigator.mediaDevices
    .getUserMedia(constraints)
    .then(gotStream)
    .catch(handleError);
}

function gotStream(stream) {
  window.stream = stream;
  audioSelect.selectedIndex = [...audioSelect.options].findIndex(
    option => option.text === stream.getAudioTracks()[0].label
  );
  videoSelect.selectedIndex = [...videoSelect.options].findIndex(
    option => option.text === stream.getVideoTracks()[0].label
  );
  videoElement.srcObject = stream;
  socket.emit("broadcaster");
}

function handleError(error) {
  console.error("Error: ", error);
}

    RemoteServer.js

    This code receives the track and assigns it to the video tag:

    let peerConnection;
const config = {
  iceServers: [
    {
      urls: ["stun:stun.l.google.com:19302"]
    }
  ]
};

const socket = io.connect(window.location.origin);
const video = document.querySelector("video");

socket.on("offer", (id, description) => {
  peerConnection = new RTCPeerConnection(config);
  peerConnection
    .setRemoteDescription(description)
    .then(() => peerConnection.createAnswer())
    .then(sdp => peerConnection.setLocalDescription(sdp))
    .then(() => {
      socket.emit("answer", id, peerConnection.localDescription);
    });
  peerConnection.ontrack = event => {
    video.srcObject = event.streams[0];
  };
  peerConnection.onicecandidate = event => {
    if (event.candidate) {
      socket.emit("candidate", id, event.candidate);
    }
  };
});

socket.on("candidate", (id, candidate) => {
  peerConnection
    .addIceCandidate(new RTCIceCandidate(candidate))
    .catch(e => console.error(e));
});

socket.on("connect", () => {
  socket.emit("watcher");
});

socket.on("broadcaster", () => {
  socket.emit("watcher");
});

socket.on("disconnectPeer", () => {
  peerConnection.close();
});

window.onunload = window.onbeforeunload = () => {
  socket.close();
};


  • configure : treat unrecognized flags as errors on MSVC

    20 July, by Kacper Michajłow
    configure: treat unrecognized flags as errors on MSVC

    This is important for feature checking to work correctly.

    It can happen that an unrecognized flag passes the compile test with
    only a warning, while failing the preprocessor-only check with an error.
    This causes all test_cpp calls to fail and silently produces arguably
    broken MSVC builds. Also, the check_* functions don't work as expected,
    because they assume the check passed even though there was a warning.

    Additionally, this brings the behavior in line with GCC/Clang based
    builds, failing early on unrecognized flags instead of silently
    continuing with warnings in the log.

    The /options:strict option is available starting in Visual Studio 2022
    version 17.0. Because of that, we cannot use check_cflags alone, as it
    would add this flag for older MSVC versions and produce warnings. So, we
    need to manually perform a version check. A bit of a chicken and egg
    problem.

    Perform the version check before adding extra flags from the user to
    ensure we don't silently fail the preprocessor check due to invalid
    flags on older MSVC versions. Note that behavior differs depending on
    whether we are compiling or only preprocessing.

    This fixes the silent difference between handling:

    `cl.exe -P foo c.c`
    c1 : fatal error C1083 : Cannot open source file : 'foo' : No such file
    or directory

    `cl.exe -c foo c.c`
    cl : Command line warning D9024 : unrecognized source file type
    'foo', object file assumed

    Here -P fails, while -c only emits warnings. Of course `foo` is
    completely bogus, but depending on the flags or configuration it may be
    an unsupported argument, or even a path converted by MSYS when run
    inside it. The objective is to always error out instead of silently
    hiding this.

    Use check_cflags even after the _MSC_FULL_VER check, for non-MSVC
    compilers. For example, Clang-CL impersonates MSVC but does not
    currently support the -options:strict flag.

    Signed-off-by: Kacper Michajłow <kasper93@gmail.com>

    • [DH] configure