
Media (1)
-
Spitfire Parade - Crisis
15 May 2011, by
Updated: September 2011
Language: English
Type: Audio
Other articles (104)
-
MediaSPIP 0.1 Beta version
25 April 2011, by
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to proceed with other manual (...) -
Updating from version 0.1 to 0.2
24 June 2013, by
An explanation of the notable changes involved in moving from MediaSPIP version 0.1 to version 0.2. What's new?
Software dependencies: the latest versions of FFmpeg (>= v1.2.1) are now used; the dependencies for Smush are installed; MediaInfo and FFprobe are installed for metadata retrieval; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...) -
Customising by adding your logo, banner or background image
5 September 2013, by
Some themes support three customisation elements: adding a logo; adding a banner; adding a background image;
On other sites (14556)
-
Senior Software Engineer for Enterprise Analytics Platform
28 January 2016, by Matthieu Aubry — Uncategorized
We're looking for a lead developer to work on the Piwik Enterprise Analytics core platform software. We have some exciting challenges to solve and need you!
You'll be working with both fellow employees and our open-source community. Piwik staff live in New Zealand, Europe (Poland, Germany) and the U.S. We do the vast majority of our collaboration online.
We are a small, flexible team, so when you come aboard you will play an integral part in engineering. As a leader you'll help us prioritise work and grow our community. You'll help create a welcoming environment for new contributors and set an example with your development practices and communication skills. You will be working closely with our CTO to build a future for Piwik.
Key Responsibilities
- Write high-quality code in PHP and JavaScript.
- Scale the existing backend systems to handle ever-increasing amounts of traffic and new product requirements.
- Communicate and collaborate effectively with colleagues and the community.
- Drive development and documentation of internal and external APIs (Piwik is an open platform).
- Help make our development practices better and reduce friction from idea to deployment.
- Mentor junior engineers and set the stage for their personal growth.
Minimum qualifications
- 5+ years of experience in product development, security, and usable interface design.
- 5+ years of experience building successful production software systems.
- Strong competency in PHP5 and JavaScript application development.
- Skill at writing tests and reviewing code.
- Strong analytical skills.
Location
- Remote work position!
- Or you can join us in our offices in Wellington, New Zealand, or Wrocław, Poland.
Benefits
- Competitive salary.
- Remote work is possible.
- Yearly meetup with the whole team abroad.
- Be part of a successful open source company and community.
- In our Wellington (NZ) and Wrocław (PL) offices: snacks, coffee, a nap room, table football, ping pong…
- Regular events.
- Great team of people.
- Exciting projects.
Learn more
Learn more about what it's like to work on Piwik in our blog post.
About Piwik
At Piwik we develop the leading open source web analytics platform, used by more than one million websites worldwide. Our vision is to help the world liberate their analytics data by building the best open alternative to Google Analytics.
The Piwik platform collects, stores and processes a lot of information: hundreds of millions of data points each month. We create intuitive, simple and beautiful reports that delight our users.
Apply online
To apply for this position, please apply online here. We look forward to receiving your application!
-
How to obtain time markers for video splitting using python/OpenCV
30 March 2016, by Bleddyn Raw-Rees
Hi, I'm new to the world of programming and computer vision, so please bear with me.
I'm working on my MSc project, which is researching the automated deletion of low-value content in digital file stores. I'm specifically looking at the sort of long shots that often occur in natural history filming, whereby a static camera is left rolling in order to capture the rare snow leopard or whatever. These shots may only have some 60s of useful content, with perhaps several hours of worthless content either side.
As a first step I have a simple motion detection program from Adrian Rosebrock's tutorial [http://www.pyimagesearch.com/2015/05/25/basic-motion-detection-and-tracking-with-python-and-opencv/#comment-393376]. Next I intend to use FFmpeg to split the video.
What I would like help with is how to get in and out points based on the first and last points that motion is detected in the video.
Here is the code should you wish to see it...
# import the necessary packages
import argparse
import datetime
import imutils
import time
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-v", "--video", help="path to the video file")
ap.add_argument("-a", "--min-area", type=int, default=500, help="minimum area size")
args = vars(ap.parse_args())

# if the video argument is None, then we are reading from webcam
if args.get("video", None) is None:
    camera = cv2.VideoCapture(0)
    time.sleep(0.25)

# otherwise, we are reading from a video file
else:
    camera = cv2.VideoCapture(args["video"])

# initialize the first frame in the video stream
firstFrame = None

# loop over the frames of the video
while True:
    # grab the current frame and initialize the occupied/unoccupied text
    (grabbed, frame) = camera.read()
    text = "Unoccupied"

    # if the frame could not be grabbed, then we have reached the end of the video
    if not grabbed:
        break

    # resize the frame, convert it to grayscale, and blur it
    frame = imutils.resize(frame, width=500)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (21, 21), 0)

    # if the first frame is None, initialize it
    if firstFrame is None:
        firstFrame = gray
        continue

    # compute the absolute difference between the current frame and first frame
    frameDelta = cv2.absdiff(firstFrame, gray)
    thresh = cv2.threshold(frameDelta, 25, 255, cv2.THRESH_BINARY)[1]

    # dilate the thresholded image to fill in holes, then find contours
    # on the thresholded image
    thresh = cv2.dilate(thresh, None, iterations=2)
    (_, cnts, _) = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    # loop over the contours
    for c in cnts:
        # if the contour is too small, ignore it
        if cv2.contourArea(c) < args["min_area"]:
            continue

        # compute the bounding box for the contour, draw it on the frame,
        # and update the text
        (x, y, w, h) = cv2.boundingRect(c)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        text = "Occupied"

    # draw the text and timestamp on the frame
    cv2.putText(frame, "Room Status: {}".format(text), (10, 20),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)
    cv2.putText(frame, datetime.datetime.now().strftime("%A %d %B %Y %I:%M:%S%p"),
                (10, frame.shape[0] - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 0, 255), 1)

    # show the frame and record if the user presses a key
    cv2.imshow("Security Feed", frame)
    cv2.imshow("Thresh", thresh)
    cv2.imshow("Frame Delta", frameDelta)
    key = cv2.waitKey(1) & 0xFF

    # if the `q` key is pressed, break from the loop
    if key == ord("q"):
        break

# cleanup the camera and close any open windows
camera.release()
cv2.destroyAllWindows()

Thanks!
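For the in and out points themselves, a minimal sketch of how the script above could be extended (assuming a video-file input; the variable names and output filename here are hypothetical, not part of the tutorial): query the capture's position in milliseconds whenever motion is detected, remember the first and last such timestamps, and pass them to FFmpeg after the loop. Note that -c copy cuts on keyframes, so the cut points are approximate; drop it and re-encode if frame accuracy matters.

import subprocess

first_motion_ms = None  # timestamp (ms) of the first frame with motion
last_motion_ms = None   # timestamp (ms) of the most recent frame with motion

# inside the frame loop, right after a contour passes the min-area test:
#     pos_ms = camera.get(cv2.CAP_PROP_POS_MSEC)  # current position in the file
#     if first_motion_ms is None:
#         first_motion_ms = pos_ms
#     last_motion_ms = pos_ms

# after the loop, cut the clip between the two markers
if first_motion_ms is not None and args.get("video"):
    subprocess.call([
        "ffmpeg", "-i", args["video"],
        "-ss", str(first_motion_ms / 1000.0),  # in point, seconds
        "-to", str(last_motion_ms / 1000.0),   # out point, seconds
        "-c", "copy",                          # stream copy, no re-encode
        "motion_clip.mp4",
    ])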
-
Docker image for Torchvision and Torchaudio with FFmpeg and NVIDIA headers
22 July 2024, by Felipe Marra
I'm trying to perform GPU video encoding/decoding using PyTorch. This means compiling FFmpeg from source along with the NVIDIA codec headers.


Currently, my Docker image looks like this:


FROM nvcr.io/nvidia/pytorch:24.06-py3

RUN apt-get -yqq update && \
 apt-get install -yq --no-install-recommends ca-certificates expat libgomp1 && \
 apt-get autoremove -y && \
 apt-get clean -y

RUN apt-get install -y bash \
 autoconf \
 automake \
 build-essential \
 cmake \
 git-core \
 libass-dev \
 libfreetype6-dev \
 libgnutls28-dev \
 libmp3lame-dev \
 libnuma1 \
 libnuma-dev \
 libsdl2-dev \
 libtool \
 libva-dev \
 libvdpau-dev \
 libvorbis-dev \
 libxcb1-dev \
 libxcb-shm0-dev \
 libxcb-xfixes0-dev \
 libc6 \
 libc6-dev \
 meson \
 ninja-build \
 pkg-config \
 texinfo \
 unzip \
 wget \
 yasm \
 zlib1g-dev && \
 apt-get -yqq update

# Miniconda
COPY .devcontainer/env.yml .

RUN mkdir -p ~/miniconda3 && \
 wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh && \
 bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3 && \
 rm -rf ~/miniconda3/miniconda.sh && \
 ~/miniconda3/bin/conda init bash && \
 ~/miniconda3/bin/conda init zsh && \
 ~/miniconda3/bin/conda env update --file env.yml

# FFMPEG
RUN git clone https://git.videolan.org/git/ffmpeg/nv-codec-headers.git && \
 cd nv-codec-headers && \
 make install

RUN git clone https://git.ffmpeg.org/ffmpeg.git ffmpeg/ && \
 cd ffmpeg && \
 ./configure \
 --prefix="$CONDA_PREFIX" \
 --enable-nonfree \
 --enable-cuda-nvcc \
 --enable-libnpp \
 --extra-cflags=-I/usr/local/cuda/include \
 --extra-ldflags=-L/usr/local/cuda/lib64 \
 --disable-static \
 --enable-shared && \
 make -k -j 8 && \
 make install && \
 ldconfig

ENV LD_LIBRARY_PATH=$CONDA_PREFIX/lib:$LD_LIBRARY_PATH
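
A typical build-and-test invocation for an image like this (the -f path, tag, and script name below are placeholders; --gpus all assumes the NVIDIA Container Toolkit is installed on the host) would be:

docker build -f .devcontainer/Dockerfile -t torch-ffmpeg-nv .
docker run --rm --gpus all torch-ffmpeg-nv python test_ffmpeg.py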



The env.yml contains the following:


name: base
channels:
 - pytorch
 - nvidia
 - conda-forge
 - defaults
dependencies:
 - pytorch=2.3.1=py3.12_cuda12.1_cudnn8.9.2_0
 - torchvision=0.18.1=py312_cu121
 - torchaudio=2.3.1=py312_cu121
 - pytorch-cuda=12.1=ha16c6d3_5



To test the image, I'm running:


import torchvision
from torchaudio.utils import ffmpeg_utils

print("Library versions:")
print(ffmpeg_utils.get_versions())
print("\nBuild config:")
print(ffmpeg_utils.get_build_config())
print("\nDecoders:")
print([k for k in ffmpeg_utils.get_video_decoders().keys() if "cuvid" in k])
print("\nEncoders:")
print([k for k in ffmpeg_utils.get_video_encoders().keys() if "nvenc" in k])

torchvision.set_video_backend('cuda')



I've also created this repo so that other people will be able to just run this image once its problems get solved.


What is going on is that outside the Conda environment, FFmpeg is configured as expected, but inside it I'm getting the following error:


libopenh264.so.5: cannot open shared object file: No such file or directory



By following this comment on a torchvision issue, I was able to make
ffprobe -hide_banner -decoders | grep h264
and
ffmpeg -hide_banner -encoders | grep 264
yield the expected outputs, as shown in torchaudio's docs, inside the Conda environment. But then torchaudio wasn't able to find FFmpeg.
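
For the missing library itself, a workaround sometimes suggested for this error (a sketch only, not verified against this exact image: the soname requested and the file Conda actually ships may differ) is to symlink the Conda-provided libopenh264 to the name the loader is asking for:

# hypothetical workaround: give the loader the soname it is requesting
ln -s "$CONDA_PREFIX/lib/libopenh264.so" "$CONDA_PREFIX/lib/libopenh264.so.5"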

I'm new to the whole ecosystem (Linux, Docker and PyTorch), and would appreciate it if someone could help me build this image, as I think something like it should actually be officially provided by PyTorch.