
Other articles (57)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; and the creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Permissions overridden by plugins

    27 April 2010

    MediaSPIP core
    autoriser_auteur_modifier(), so that visitors can edit their own information on the authors page

On other sites (11176)

  • Save video using opencv with H264 codec

    31 October 2023, by ldiaz997

    This is beyond me and I don't know what I'm doing wrong. I have read that in order to get my video in the H.264 codec, I need to build OpenCV from source. Well, I did that, and I also built ffmpeg from source (Docker ffmpeg Compiler). But when I run my application using Docker, I still can't get past this error:

    


    [ERROR:0@93.327] global cap_ffmpeg_impl.hpp:3018 open Could not find encoder for codec_id=27, error: Encoder not found
[ERROR:0@93.327] global cap_ffmpeg_impl.hpp:3093 open VIDEOIO/FFMPEG: Failed to initialize VideoWriter
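
    For context, codec_id=27 is FFmpeg's identifier for H.264 (AV_CODEC_ID_H264), so the error means the FFmpeg that OpenCV's videoio backend is using exposes no H.264 encoder (such as libx264). A minimal sketch, not part of the original question, for checking this from a Python shell inside the container:

import os
import tempfile

import cv2

# Show only the "Video I/O" portion of the build information, which lists the
# FFmpeg libraries this cv2 was linked against.
info = cv2.getBuildInformation()
start = max(info.find("Video I/O"), 0)
print(info[start:start + 600])

# Try to open a writer with the H.264 ('avc1') fourcc; isOpened() returns False
# when the underlying FFmpeg has no matching encoder.
path = os.path.join(tempfile.gettempdir(), "probe.mp4")
writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"avc1"), 30, (640, 480))
print("H.264 writer opened:", writer.isOpened())
writer.release()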


    


    Dockerfile:

    


    FROM python:3.10.12-slim-buster

RUN apt-get update

# Set the working directory in the container
WORKDIR /app

# Copy the application code into the container
COPY . .

# Set ffmpeg and ffprobe binary files
RUN mv ffmpeg /usr/local/bin
RUN mv ffprobe /usr/local/bin

# Build opencv from source, to be able to use h264 codec.
RUN apt-get install -y cmake \
    gcc \
    g++ \
    python3-numpy \
    libavcodec-dev \
    libavformat-dev \
    libswscale-dev \
    libgstreamer-plugins-base1.0-dev \
    libgstreamer1.0-dev \
    libpng-dev \
    libjpeg-dev \
    libopenexr-dev \
    libtiff-dev \
    libwebp-dev \
    git

RUN git clone --depth 1 --branch 4.8.0 https://github.com/opencv/opencv.git && \
    git clone --depth 1 --branch 4.8.0 https://github.com/opencv/opencv_contrib.git && \
    cd opencv && \
    mkdir build && \
    cd build && \
    cmake -D OPENCV_EXTRA_MODULES_PATH=/app/opencv_contrib/modules ../ && \
    make -j"$(nproc)" && \
    make install

# Remove opencv github project
RUN rm -r opencv

# Remove opencv_contrib github project
RUN rm -r opencv_contrib

# Prevents Python from writing pyc files to disc
ENV PYTHONDONTWRITEBYTECODE 1

# Prevents Python from buffering stdout and stderr
ENV PYTHONUNBUFFERED 1

# Install python dependencies
RUN pip install --upgrade pip
RUN pip install --no-cache-dir -r requirements.txt

# Install netcat to know when rabbitmq is running
RUN apt-get install -y netcat

# Set execute permissions
RUN chmod +x entrypoint.sh
RUN chmod +x web_start.sh

ENTRYPOINT ["./entrypoint.sh"]
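
    One thing worth checking with a build like this (an aside, not from the original question): if requirements.txt also pulls in an opencv-python wheel, the import cv2 at runtime may resolve to that wheel, which bundles its own FFmpeg, rather than to the OpenCV compiled above. A quick sketch to verify which one is active inside the container:

import cv2

print("cv2 loaded from:", cv2.__file__)   # a site-packages wheel path vs. the local build
print("cv2 version:", cv2.__version__)    # should match the tag checked out above (4.8.0)
print("\n".join(cv2.getBuildInformation().splitlines()[:3]))  # header of the active build's configuration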


    


    I ran the command ./ffmpeg -i 57b3e3a7-ad22-469d-a7ff-cf76ba780664 -vcodec libx264 -acodec aac output.mp4 to test ffmpeg and this was the result.

    


    ffmpeg version N-112515-gba6a5e7a3d Copyright (c) 2000-2023 the FFmpeg developers
  built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.12) 20160609
  configuration: --prefix=/root/ffmpeg_build --pkg-config-flags=--static --extra-libs=-static --extra-cflags=--static --extra-cflags=-I/root/ffmpeg_build/include --extra-ldflags=-L/root/ffmpeg_build/lib --extra-libs='-lpthread -lm' --bindir=/root/bin --enable-gpl --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-nonfree
  libavutil      58. 27.100 / 58. 27.100
  libavcodec     60. 30.102 / 60. 30.102
  libavformat    60. 15.101 / 60. 15.101
  libavdevice    60.  2.101 / 60.  2.101
  libavfilter     9. 11.100 /  9. 11.100
  libswscale      7.  4.100 /  7.  4.100
  libswresample   4. 11.100 /  4. 11.100
  libpostproc    57.  2.100 / 57.  2.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '57b3e3a7-ad22-469d-a7ff-cf76ba780664':
  Metadata:
    major_brand     : qt  
    minor_version   : 0
    compatible_brands: qt  
    creation_time   : 2023-10-30T15:34:32.000000Z
    com.apple.quicktime.make: Apple
    com.apple.quicktime.model: iPhone 13 Pro Max
    com.apple.quicktime.software: 16.6
    com.apple.quicktime.creationdate: 2023-10-30T11:34:32-0400
  Duration: 00:00:03.60, start: 0.000000, bitrate: 16264 kb/s
  Stream #0:0[0x1](und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709, progressive), 1920x1080, 16120 kb/s, 29.99 fps, 29.97 tbr, 600 tbn (default)
    Metadata:
      creation_time   : 2023-10-30T15:34:32.000000Z
      handler_name    : Core Media Video
      vendor_id       : [0][0][0][0]
      encoder         : H.264
    Side data:
      displaymatrix: rotation of -90.00 degrees
  Stream #0:1[0x2](und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 89 kb/s (default)
    Metadata:
      creation_time   : 2023-10-30T15:34:32.000000Z
      handler_name    : Core Media Audio
      vendor_id       : [0][0][0][0]
  Stream #0:2[0x3](und): Data: none (mebx / 0x7862656D), 0 kb/s (default)
    Metadata:
      creation_time   : 2023-10-30T15:34:32.000000Z
      handler_name    : Core Media Metadata
  Stream #0:3[0x4](und): Data: none (mebx / 0x7862656D), 0 kb/s (default)
    Metadata:
      creation_time   : 2023-10-30T15:34:32.000000Z
      handler_name    : Core Media Metadata
  Stream #0:4[0x5](und): Data: none (mebx / 0x7862656D), 34 kb/s (default)
    Metadata:
      creation_time   : 2023-10-30T15:34:32.000000Z
      handler_name    : Core Media Metadata
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
  Stream #0:1 -> #0:1 (aac (native) -> aac (native))
Press [q] to stop, [?] for help
[libx264 @ 0x5ae4c00] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 AVX2 LZCNT BMI2
[libx264 @ 0x5ae4c00] profile High, level 4.0
[libx264 @ 0x5ae4c00] 264 - core 148 r2643 5c65704 - H.264/MPEG-4 AVC codec - Copyleft 2003-2015 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=30 lookahead_threads=5 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to 'output.mp4':
  Metadata:
    major_brand     : qt  
    minor_version   : 0
    compatible_brands: qt  
    com.apple.quicktime.creationdate: 2023-10-30T11:34:32-0400
    com.apple.quicktime.make: Apple
    com.apple.quicktime.model: iPhone 13 Pro Max
    com.apple.quicktime.software: 16.6
    encoder         : Lavf60.15.101
  Stream #0:0(und): Video: h264 (avc1 / 0x31637661), yuv420p(tv, bt709, progressive), 1080x1920, q=2-31, 29.97 fps, 30k tbn (default)
    Metadata:
      creation_time   : 2023-10-30T15:34:32.000000Z
      handler_name    : Core Media Video
      vendor_id       : [0][0][0][0]
      encoder         : Lavc60.30.102 libx264
    Side data:
      cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
      displaymatrix: rotation of -0.00 degrees
  Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 69 kb/s (default)
    Metadata:
      creation_time   : 2023-10-30T15:34:32.000000Z
      handler_name    : Core Media Audio
      vendor_id       : [0][0][0][0]
      encoder         : Lavc60.30.102 aac
[out#0/mp4 @ 0x5ae3440] video:2773kB audio:31kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.157082%
frame=  108 fps= 74 q=-1.0 Lsize=    2809kB time=00:00:03.59 bitrate=6393.3kbits/s speed=2.47x    
[libx264 @ 0x5ae4c00] frame I:4     Avg QP:22.27  size: 48408
[libx264 @ 0x5ae4c00] frame P:104   Avg QP:24.58  size: 25440
[libx264 @ 0x5ae4c00] mb I  I16..4: 10.3% 82.9%  6.8%
[libx264 @ 0x5ae4c00] mb P  I16..4:  4.6% 18.1%  0.8%  P16..4: 40.3%  6.9%  4.1%  0.0%  0.0%    skip:25.3%
[libx264 @ 0x5ae4c00] 8x8 transform intra:78.0% inter:85.0%
[libx264 @ 0x5ae4c00] coded y,uvDC,uvAC intra: 44.9% 29.1% 0.1% inter: 22.5% 23.3% 0.0%
[libx264 @ 0x5ae4c00] i16 v,h,dc,p: 17% 49% 14% 19%
[libx264 @ 0x5ae4c00] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 19% 25% 37%  3%  3%  5%  3%  2%  4%
[libx264 @ 0x5ae4c00] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 29% 30% 17%  3%  4%  8%  3%  2%  3%
[libx264 @ 0x5ae4c00] i8c dc,h,v,p: 67% 20% 12%  0%
[libx264 @ 0x5ae4c00] Weighted P-Frames: Y:1.9% UV:0.0%
[libx264 @ 0x5ae4c00] ref P L0: 61.8% 10.4% 18.3%  9.4%  0.2%
[libx264 @ 0x5ae4c00] kb/s:6303.40
[aac @ 0x68c9880] Qavg: 119.986


    


    The resulting video used the H.264 codec, so in my opinion the problem lies in OpenCV. Basically, this is what I do in my Python code:

    


    cap = cv2.VideoCapture(video)
shoot_frames = []
while True:
    ret, img = cap.read()
    if not ret:
       break
    if some_condition:
       shoot_frames.append(img)
    if len(shoot_frames) > 41:
       out1 = cv2.VideoWriter(upload_path(name , dir), cv2.VideoWriter_fourcc(*'avc1'), int(fps), (int(width), int(height)), True)
       for shoot_frame in shoot_frames:
           out1.write(shoot_frame)
       out1.release()
       shoot_frames = []
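
    A slightly more defensive version of that loop is sketched below (same idea, with placeholder paths standing in for the question's video and upload_path(name, dir)): it checks isOpened() so a missing encoder fails loudly instead of silently producing empty files, and it falls back to a fourcc the local build is more likely to support.

import cv2

def write_batch(frames, path, fps, size):
    """Write a list of BGR frames to `path`, preferring H.264 ('avc1')."""
    for fourcc in ("avc1", "mp4v"):        # fall back to MPEG-4 Part 2 if H.264 is unavailable
        writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*fourcc), fps, size, True)
        if writer.isOpened():              # False when the backend lacks the requested encoder
            for frame in frames:
                writer.write(frame)
            writer.release()
            return fourcc
        writer.release()
    raise RuntimeError("no usable encoder found for " + path)

video = "input.mp4"                        # placeholder for the question's `video`
cap = cv2.VideoCapture(video)
fps = cap.get(cv2.CAP_PROP_FPS) or 30
size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))

shoot_frames = []
batch_index = 0
while True:
    ret, img = cap.read()
    if not ret:
        break
    shoot_frames.append(img)               # the original appends only when `some_condition` holds
    if len(shoot_frames) > 41:
        out_path = f"batch_{batch_index}.mp4"   # stands in for upload_path(name, dir)
        print("wrote", out_path, "using", write_batch(shoot_frames, out_path, fps, size))
        shoot_frames = []
        batch_index += 1
cap.release()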


    


    Output from print(cv2.getBuildInformation()):

    


    General configuration for OpenCV 4.8.1 =====================================
  Version control:               4.8.1-dirty

  Platform:
    Timestamp:                   2023-09-27T14:20:56Z
    Host:                        Linux 5.15.0-1046-azure x86_64
    CMake:                       3.27.5
    CMake generator:             Unix Makefiles
    CMake build tool:            /bin/gmake
    Configuration:               Release

  CPU/HW features:
    Baseline:                    SSE SSE2 SSE3
      requested:                 SSE3
    Dispatched code generation:  SSE4_1 SSE4_2 FP16 AVX AVX2 AVX512_SKX
      requested:                 SSE4_1 SSE4_2 AVX FP16 AVX2 AVX512_SKX
      SSE4_1 (16 files):         + SSSE3 SSE4_1
      SSE4_2 (1 files):          + SSSE3 SSE4_1 POPCNT SSE4_2
      FP16 (0 files):            + SSSE3 SSE4_1 POPCNT SSE4_2 FP16 AVX
      AVX (7 files):             + SSSE3 SSE4_1 POPCNT SSE4_2 AVX
      AVX2 (35 files):           + SSSE3 SSE4_1 POPCNT SSE4_2 FP16 FMA3 AVX AVX2
      AVX512_SKX (5 files):      + SSSE3 SSE4_1 POPCNT SSE4_2 FP16 FMA3 AVX AVX2 AVX_512F AVX512_COMMON AVX512_SKX

  C/C++:
    Built as dynamic libs?:      NO
    C++ standard:                11
    C++ Compiler:                /opt/rh/devtoolset-10/root/usr/bin/c++  (ver 10.2.1)
    C++ flags (Release):         -Wl,-strip-all   -fsigned-char -W -Wall -Wreturn-type -Wnon-virtual-dtor -Waddress -Wsequence-point -Wformat -Wformat-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Wsuggest-override -Wno-delete-non-virtual-dtor -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections  -msse -msse2 -msse3 -fvisibility=hidden -fvisibility-inlines-hidden -O3 -DNDEBUG  -DNDEBUG
    C++ flags (Debug):           -Wl,-strip-all   -fsigned-char -W -Wall -Wreturn-type -Wnon-virtual-dtor -Waddress -Wsequence-point -Wformat -Wformat-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Wsuggest-override -Wno-delete-non-virtual-dtor -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections  -msse -msse2 -msse3 -fvisibility=hidden -fvisibility-inlines-hidden -g  -O0 -DDEBUG -D_DEBUG
    C Compiler:                  /opt/rh/devtoolset-10/root/usr/bin/cc
    C flags (Release):           -Wl,-strip-all   -fsigned-char -W -Wall -Wreturn-type -Waddress -Wsequence-point -Wformat -Wformat-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wuninitialized -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections  -msse -msse2 -msse3 -fvisibility=hidden -O3 -DNDEBUG  -DNDEBUG
    C flags (Debug):             -Wl,-strip-all   -fsigned-char -W -Wall -Wreturn-type -Waddress -Wsequence-point -Wformat -Wformat-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wuninitialized -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections  -msse -msse2 -msse3 -fvisibility=hidden -g  -O0 -DDEBUG -D_DEBUG
    Linker flags (Release):      -Wl,--exclude-libs,libippicv.a -Wl,--exclude-libs,libippiw.a -L/ffmpeg_build/lib  -Wl,--gc-sections -Wl,--as-needed -Wl,--no-undefined  
    Linker flags (Debug):        -Wl,--exclude-libs,libippicv.a -Wl,--exclude-libs,libippiw.a -L/ffmpeg_build/lib  -Wl,--gc-sections -Wl,--as-needed -Wl,--no-undefined  
    ccache:                      YES
    Precompiled headers:         NO
    Extra dependencies:          /lib64/libopenblas.so Qt5::Core Qt5::Gui Qt5::Widgets Qt5::Test Qt5::Concurrent /usr/local/lib/libpng.so /lib64/libz.so dl m pthread rt
    3rdparty dependencies:       libprotobuf ade ittnotify libjpeg-turbo libwebp libtiff libopenjp2 IlmImf quirc ippiw ippicv

  OpenCV modules:
    To be built:                 calib3d core dnn features2d flann gapi highgui imgcodecs imgproc ml objdetect photo python3 stitching video videoio
    Disabled:                    world
    Disabled by dependency:      -
    Unavailable:                 java python2 ts
    Applications:                -
    Documentation:               NO
    Non-free algorithms:         NO

  GUI:                           QT5
    QT:                          YES (ver 5.15.0 )
      QT OpenGL support:         NO
    GTK+:                        NO
    VTK support:                 NO

  Media I/O: 
    ZLib:                        /lib64/libz.so (ver 1.2.7)
    JPEG:                        libjpeg-turbo (ver 2.1.3-62)
    WEBP:                        build (ver encoder: 0x020f)
    PNG:                         /usr/local/lib/libpng.so (ver 1.6.40)
    TIFF:                        build (ver 42 - 4.2.0)
    JPEG 2000:                   build (ver 2.5.0)
    OpenEXR:                     build (ver 2.3.0)
    HDR:                         YES
    SUNRASTER:                   YES
    PXM:                         YES
    PFM:                         YES

  Video I/O:
    DC1394:                      NO
    FFMPEG:                      YES
      avcodec:                   YES (59.37.100)
      avformat:                  YES (59.27.100)
      avutil:                    YES (57.28.100)
      swscale:                   YES (6.7.100)
      avresample:                NO
    GStreamer:                   NO
    v4l/v4l2:                    YES (linux/videodev2.h)

  Parallel framework:            pthreads

  Trace:                         YES (with Intel ITT)

  Other third-party libraries:
    Intel IPP:                   2021.8 [2021.8.0]
           at:                   /io/_skbuild/linux-x86_64-3.7/cmake-build/3rdparty/ippicv/ippicv_lnx/icv
    Intel IPP IW:                sources (2021.8.0)
              at:                /io/_skbuild/linux-x86_64-3.7/cmake-build/3rdparty/ippicv/ippicv_lnx/iw
    VA:                          NO
    Lapack:                      YES (/lib64/libopenblas.so)
    Eigen:                       NO
    Custom HAL:                  NO
    Protobuf:                    build (3.19.1)
    Flatbuffers:                 builtin/3rdparty (23.5.9)

  OpenCL:                        YES (no extra features)
    Include path:                /io/opencv/3rdparty/include/opencl/1.2
    Link libraries:              Dynamic load

  Python 3:
    Interpreter:                 /opt/python/cp37-cp37m/bin/python3.7 (ver 3.7.17)
    Libraries:                   libpython3.7m.a (ver 3.7.17)
    numpy:                       /home/ci/.local/lib/python3.7/site-packages/numpy/core/include (ver 1.17.0)
    install path:                python/cv2/python-3

  Python (for build):            /opt/python/cp37-cp37m/bin/python3.7

  Java:                          
    ant:                         NO
    Java:                        NO
    JNI:                         NO
    Java wrappers:               NO
    Java tests:                  NO

  Install to:                    /io/_skbuild/linux-x86_64-3.7/cmake-install
-----------------------------------------------------------------




    


    Update

    


    I made my Docker image simpler, and therefore my question as well. ffmpeg is now installed from the distribution repository:

    


    FROM python:3.10.12-slim-buster

RUN apt-get update

# Set the working directory in the container
WORKDIR /app

# Install ffmpeg for opencv
RUN apt-get install -y ffmpeg

# Copy the application code into the container
COPY . .

# Build opencv from source, to be able to use h264 codec.
RUN apt-get install -y cmake \
    gcc \
    g++ \
    python3-numpy \
    libavcodec-dev \
    libavformat-dev \
    libswscale-dev \
    libgstreamer-plugins-base1.0-dev \
    libgstreamer1.0-dev \
    libpng-dev \
    libjpeg-dev \
    libopenexr-dev \
    libtiff-dev \
    libwebp-dev \
    git

RUN git clone --depth 1 --branch 4.8.0 https://github.com/opencv/opencv.git && \
    git clone --depth 1 --branch 4.8.0 https://github.com/opencv/opencv_contrib.git && \
    cd opencv && \
    mkdir build && \
    cd build && \
    cmake -D CMAKE_BUILD_TYPE=Release -D OPENCV_EXTRA_MODULES_PATH=/app/opencv_contrib/modules -D OPENCV_ENABLE_NONFREE=ON ../ && \
    make -j"$(nproc)" && \
    make install

# Remove opencv github project
RUN rm -r opencv

# Remove opencv_contrib github project
RUN rm -r opencv_contrib

# Prevents Python from writing pyc files to disc
ENV PYTHONDONTWRITEBYTECODE 1

# Prevents Python from buffering stdout and stderr
ENV PYTHONUNBUFFERED 1

# Install python dependencies
RUN pip install --upgrade pip
RUN pip install --no-cache-dir -r requirements.txt

# Install netcat to know when rabbitmq is running
RUN apt-get install -y netcat

# Set execute permissions
RUN chmod +x entrypoint.sh
RUN chmod +x web_start.sh

ENTRYPOINT ["./entrypoint.sh"]


    


    Run the following commands inside the Docker container:

    


    $ ffmpeg -version

ffmpeg version 4.1.11-0+deb10u1 Copyright (c) 2000-2023 the FFmpeg developers
built with gcc 8 (Debian 8.3.0-6)
configuration: --prefix=/usr --extra-version=0+deb10u1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
libavutil      56. 22.100 / 56. 22.100
libavcodec     58. 35.100 / 58. 35.100
libavformat    58. 20.100 / 58. 20.100
libavdevice    58.  5.100 / 58.  5.100
libavfilter     7. 40.101 /  7. 40.101
libavresample   4.  0.  0 /  4.  0.  0
libswscale      5.  3.100 /  5.  3.100
libswresample   3.  3.100 /  3.  3.100
libpostproc    55.  3.100 / 55.  3.100


    


    $ ffmpeg -i cf91f302-c357-49ba-b59c-bcfb8b7f4866 -vcodec libx264 -f mp4 output.mp4

ffmpeg version 4.1.11-0+deb10u1 Copyright (c) 2000-2023 the FFmpeg developers
  built with gcc 8 (Debian 8.3.0-6)
  configuration: --prefix=/usr --extra-version=0+deb10u1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
  libavutil      56. 22.100 / 56. 22.100
  libavcodec     58. 35.100 / 58. 35.100
  libavformat    58. 20.100 / 58. 20.100
  libavdevice    58.  5.100 / 58.  5.100
  libavfilter     7. 40.101 /  7. 40.101
  libavresample   4.  0.  0 /  4.  0.  0
  libswscale      5.  3.100 /  5.  3.100
  libswresample   3.  3.100 /  3.  3.100
  libpostproc    55.  3.100 / 55.  3.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'cf91f302-c357-49ba-b59c-bcfb8b7f4866':
  Metadata:
    major_brand     : qt  
    minor_version   : 0
    compatible_brands: qt  
    creation_time   : 2023-10-31T10:38:42.000000Z
    com.apple.quicktime.make: Apple
    com.apple.quicktime.model: iPhone 13 Pro Max
    com.apple.quicktime.software: 16.6
    com.apple.quicktime.creationdate: 2023-10-31T06:38:42-0400
  Duration: 00:00:04.23, start: 0.000000, bitrate: 15915 kb/s
    Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 1920x1080, 15767 kb/s, 30 fps, 30 tbr, 600 tbn, 1200 tbc (default)
    Metadata:
      rotate          : 90
      creation_time   : 2023-10-31T10:38:42.000000Z
      handler_name    : Core Media Video
      encoder         : H.264
    Side data:
      displaymatrix: rotation of -90.00 degrees
    Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 89 kb/s (default)
    Metadata:
      creation_time   : 2023-10-31T10:38:42.000000Z
      handler_name    : Core Media Audio
    Stream #0:2(und): Data: none (mebx / 0x7862656D), 0 kb/s (default)
    Metadata:
      creation_time   : 2023-10-31T10:38:42.000000Z
      handler_name    : Core Media Metadata
    Stream #0:3(und): Data: none (mebx / 0x7862656D), 0 kb/s (default)
    Metadata:
      creation_time   : 2023-10-31T10:38:42.000000Z
      handler_name    : Core Media Metadata
    Stream #0:4(und): Data: none (mebx / 0x7862656D), 34 kb/s (default)
    Metadata:
      creation_time   : 2023-10-31T10:38:42.000000Z
      handler_name    : Core Media Metadata
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
  Stream #0:1 -> #0:1 (aac (native) -> aac (native))
Press [q] to stop, [?] for help
[libx264 @ 0x55db965ee980] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0x55db965ee980] profile High, level 4.0
[libx264 @ 0x55db965ee980] 264 - core 155 r2917 0a84d98 - H.264/MPEG-4 AVC codec - Copyleft 2003-2018 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to 'output.mp4':
  Metadata:
    major_brand     : qt  
    minor_version   : 0
    compatible_brands: qt  
    com.apple.quicktime.creationdate: 2023-10-31T06:38:42-0400
    com.apple.quicktime.make: Apple
    com.apple.quicktime.model: iPhone 13 Pro Max
    com.apple.quicktime.software: 16.6
    encoder         : Lavf58.20.100
    Stream #0:0(und): Video: h264 (libx264) (avc1 / 0x31637661), yuv420p, 1080x1920, q=-1--1, 30 fps, 15360 tbn, 30 tbc (default)
    Metadata:
      encoder         : Lavc58.35.100 libx264
      creation_time   : 2023-10-31T10:38:42.000000Z
      handler_name    : Core Media Video
    Side data:
      cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
      displaymatrix: rotation of -0.00 degrees
    Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 69 kb/s (default)
    Metadata:
      creation_time   : 2023-10-31T10:38:42.000000Z
      handler_name    : Core Media Audio
      encoder         : Lavc58.35.100 aac
frame=  127 fps= 27 q=-1.0 Lsize=    2005kB time=00:00:04.24 bitrate=3866.2kbits/s speed=0.909x    
video:1964kB audio:36kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.282549%
[libx264 @ 0x55db965ee980] frame I:1     Avg QP:21.43  size: 36791
[libx264 @ 0x55db965ee980] frame P:59    Avg QP:23.61  size: 22380
[libx264 @ 0x55db965ee980] frame B:67    Avg QP:24.20  size:  9743
[libx264 @ 0x55db965ee980] consecutive B-frames: 20.5% 22.0% 16.5% 40.9%
[libx264 @ 0x55db965ee980] mb I  I16..4: 29.4% 58.6% 11.9%
[libx264 @ 0x55db965ee980] mb P  I16..4: 15.0% 21.8%  1.3%  P16..4: 26.1%  7.5%  3.1%  0.0%  0.0%    skip:25.2%
[libx264 @ 0x55db965ee980] mb B  I16..4:  1.9%  1.7%  0.1%  B16..8: 36.3%  3.6%  0.5%  direct: 3.9%  skip:52.1%  L0:42.9% L1:52.1% BI: 5.0%
[libx264 @ 0x55db965ee980] 8x8 transform intra:56.2% inter:86.6%
[libx264 @ 0x55db965ee980] coded y,uvDC,uvAC intra: 19.5% 27.3% 2.1% inter: 11.7% 18.9% 0.1%
[libx264 @ 0x55db965ee980] i16 v,h,dc,p: 25% 54%  8% 12%
[libx264 @ 0x55db965ee980] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 22% 25% 44%  1%  2%  2%  2%  1%  1%
[libx264 @ 0x55db965ee980] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 16% 45% 13%  2%  7%  6%  6%  3%  3%
[libx264 @ 0x55db965ee980] i8c dc,h,v,p: 62% 27% 10%  1%
[libx264 @ 0x55db965ee980] Weighted P-Frames: Y:3.4% UV:0.0%
[libx264 @ 0x55db965ee980] ref P L0: 65.2% 18.0% 12.2%  4.6%  0.1%
[libx264 @ 0x55db965ee980] ref B L0: 89.1%  9.3%  1.6%
[libx264 @ 0x55db965ee980] ref B L1: 97.2%  2.8%
[libx264 @ 0x55db965ee980] kb/s:3798.37
[aac @ 0x55db965edf00] Qavg: 125.454


    


    The errors persist.

    


    >>> import cv2
>>> out = cv2.VideoWriter("./out.mp4", cv2.VideoWriter_fourcc(*'avc1'), 30, (800, 600), True)
[ERROR:0@91.872] global cap_ffmpeg_impl.hpp:3018 open Could not find encoder for codec_id=27, error: Encoder not found
[ERROR:0@91.872] global cap_ffmpeg_impl.hpp:3093 open VIDEOIO/FFMPEG: Failed to initialize VideoWriter


    


    Could someone please tell me what I'm doing wrong?
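
    Since the ffmpeg inside this container is built with --enable-libx264 while the cv2 build apparently is not, one possible workaround, sketched below and not taken from the thread, is to write the frames with a codec cv2 does offer and then re-encode the result to H.264 with that ffmpeg:

import subprocess

import cv2
import numpy as np

tmp_path, final_path = "tmp_mp4v.mp4", "out_h264.mp4"

# Write with MPEG-4 Part 2 ('mp4v'), which the FFmpeg used by cv2 usually provides.
writer = cv2.VideoWriter(tmp_path, cv2.VideoWriter_fourcc(*"mp4v"), 30, (800, 600), True)
for _ in range(60):                                   # dummy black frames, just for the sketch
    writer.write(np.zeros((600, 800, 3), dtype=np.uint8))
writer.release()

# Re-encode to H.264 with the system ffmpeg (-y overwrites an existing output file).
subprocess.run(
    ["ffmpeg", "-y", "-i", tmp_path, "-c:v", "libx264", "-pix_fmt", "yuv420p", final_path],
    check=True,
)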

    


  • Python: ani.save very slow. Any alternatives to create videos?

    14 November 2023, by Czeskleba

    I'm doing some simple diffusion calculations. I save 2 matrices to 2 datasets every so many steps (every 2 s or so) in a single .h5 file. I then load that file in another script and create some figures (2 subplots etc.; see/run the code, I know it could be prettier). Then I use matplotlib.animation to make the animation. In the very last lines of the code below, I run matplotlib's ani.save command.

    


    And that's where the problem is. The animation itself is created within 2 seconds, even for my longer animations (14,755 frames, done in under 2 s at 8,284 it/s), but after that the ani.save call in the last lines takes forever (it didn't finish overnight). It constantly reserves/uses about 10 GB of my RAM but seemingly never finishes. If you run the code below, be sure to set frames_to_do (defined near the top) to something like 30 or 60 to see that it does in fact save an mp4 for shorter videos. You can set it higher to see how quickly the time needed to save becomes unreasonable.

    


    I've been fiddling with this for 2 days now and I can't figure it out. I guess my question is: is there any way to create the video in a reasonable time like this? Or do I need something other than animation?

    


    You should be able to just run the code. I'll provide a diffusion_array.h5 with 140 frames so you don't have to create a dummy file, if I can figure out how to upload something like this safely. (The results use dummy numbers for now; the diffusion coefficients etc. are not right yet.)
    I used Dropbox. Not sure if that's allowed; if not, I'll delete the link, so PM me or something.

    


    https://www.dropbox.com/scl/fi/fv9stfqkm4trmt3zwtvun/diffusion_array.h5?rlkey=2oxuegnlcxq0jt6ed77rbskyu&dl=0

    


    Here is the code:

    


    import h5py
import matplotlib.pyplot as plt
import matplotlib.colors as mcolors
from matplotlib.animation import FuncAnimation
from tqdm import tqdm
import numpy as np


# saving the .mp4 after tho takes forever

# Create an empty figure and axis
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(16, 9), dpi=96)

# Load all saved arrays into a list
file_name = 'diffusion_array.h5'
loaded_u_arrays = []
loaded_h_arrays = []
frames_to_do = 14755  # for now like this, use # version once the slow mp4 convert is cleared up

# with h5py.File(file_name, 'r') as hf:
#     for key in hf.keys():
#         if key.startswith('u_snapshot_'):
#             loaded_u_arrays.append(hf[key][:])
#         elif key.startswith('h_snapshot_'):
#             loaded_h_arrays.append(hf[key][:])

with h5py.File(file_name, 'r') as hf:
    for i in range(frames_to_do):
        target_key1 = f'u_snapshot_{i:05d}'
        target_key2 = f'h_snapshot_{i:05d}'
        if target_key1 in hf:
            loaded_u_arrays.append(hf[target_key1][:])
        else:
            print(f'Dataset u for time step {i} not found in the file.')
        if target_key2 in hf:
            loaded_h_arrays.append(hf[target_key2][:])
        else:
            print(f'Dataset h for time step {i} not found in the file.')

# Create "empty" imshow objects
# First one
norm1 = mcolors.Normalize(vmin=140, vmax=400)
cmap1 = plt.get_cmap('hot')
cmap1.set_under('0.85')
im1 = ax1.imshow(loaded_u_arrays[0], cmap=cmap1, norm=norm1)
ax1.set_title('Diffusion Heatmap')
ax1.set_xlabel('X')
ax1.set_ylabel('Y')
cbar_ax = fig.add_axes([0.05, 0.15, 0.03, 0.7])
cbar_ax.set_xlabel('$T$ / K', labelpad=20)
fig.colorbar(im1, cax=cbar_ax)


# Second one
ax2 = plt.subplot(1, 2, 2)
norm2 = mcolors.Normalize(vmin=-0.1, vmax=5)
cmap2 = plt.get_cmap('viridis')
cmap2.set_under('0.85')
im2 = ax2.imshow(loaded_h_arrays[0], cmap=cmap2, norm=norm2)
ax2.set_title('Diffusion Hydrogen')
ax2.set_xlabel('X')
ax2.set_ylabel('Y')
cbar_ax = fig.add_axes([0.9, 0.15, 0.03, 0.7])
cbar_ax.set_xlabel('HD in ml/100g', labelpad=20)
fig.colorbar(im2, cax=cbar_ax)

# General
fig.subplots_adjust(right=0.85)
time_text = ax2.text(-15, 0.80, f'Time: {0} s', transform=plt.gca().transAxes, color='black', fontsize=20)

# Annotations
# Heat 1
marker_style = dict(marker='o', markersize=6, markerfacecolor='black', markeredgecolor='black')
ax1.scatter(*[10, 40], s=marker_style['markersize'], c=marker_style['markerfacecolor'],
            edgecolors=marker_style['markeredgecolor'])
ann_heat1 = ax1.annotate(f'Temp: {loaded_u_arrays[0][40, 10]:.0f}', xy=[10, 40], xycoords='data',
             xytext=([10, 40][0], [10, 40][1] + 48), textcoords='data',
             arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=0.3"), fontsize=12, color='black')
# Heat 2
ax1.scatter(*[140, 85], s=marker_style['markersize'], c=marker_style['markerfacecolor'],
            edgecolors=marker_style['markeredgecolor'])
ann_heat2 = ax1.annotate(f'Temp: {loaded_u_arrays[0][85, 140]:.0f}', xy=[140, 85], xycoords='data',
             xytext=([140, 85][0] + 55, [140, 85][1] + 3), textcoords='data',
             arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=0.3"), fontsize=12, color='black')

# Diffusion 1
marker_style = dict(marker='o', markersize=6, markerfacecolor='black', markeredgecolor='black')
ax2.scatter(*[10, 40], s=marker_style['markersize'], c=marker_style['markerfacecolor'],
            edgecolors=marker_style['markeredgecolor'])
ann_diff1 = ax2.annotate(f'HD: {loaded_h_arrays[0][40, 10]:.0f}', xy=[10, 40], xycoords='data',
             xytext=([10, 40][0], [10, 40][1] + 48), textcoords='data',
             arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=0.3"), fontsize=12, color='black')
# Diffusion 2
ax2.scatter(*[140, 85], s=marker_style['markersize'], c=marker_style['markerfacecolor'],
            edgecolors=marker_style['markeredgecolor'])
ann_diff2 = ax2.annotate(f'HD: {loaded_h_arrays[0][85, 140]:.0f}', xy=[140, 85], xycoords='data',
             xytext=([140, 85][0] + 55, [140, 85][1] + 3), textcoords='data',
             arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=0.3"), fontsize=12, color='black')


# Function to update the animation
def update(frame, *args):
    loaded_u_array, loaded_h_array = args

    s_per_frame = 2  # during weld/cooling you save a state every 2s
    frames_to_room_temp = 7803  # that means this many frames need to be animated
    dt_big = 87  # during "just diffusion" you save every 10 frame but 87s pass in those

    # Update the time step shown
    if frame <= frames_to_room_temp:
        im1.set_data(loaded_u_array[frame])
        im2.set_data(loaded_h_array[frame])
        time_text.set_text(f'Time: {frame * s_per_frame} s')

    else:
        im1.set_data(loaded_u_array[frame])
        im2.set_data(loaded_h_array[frame])
        calc_time = int(((2 * frames_to_room_temp) + (frame - frames_to_room_temp) * 87) / 3600)
        time_text.set_text(f'Time: {calc_time} s')

    # Annotate some points
    ann_heat1.set_text(f'Temp: {loaded_u_arrays[frame][40, 10]:.0f}')
    ann_heat2.set_text(f'Temp: {loaded_u_arrays[frame][85, 140]:.0f}')
    ann_diff1.set_text(f'HD: {loaded_h_arrays[frame][40, 10]:.0f}')
    ann_diff2.set_text(f'HD: {loaded_h_arrays[frame][85, 140]:.0f}')

    return im1, im2  # Return the updated artists


# Create the animation without displaying it
ani = FuncAnimation(fig, update, frames=frames_to_do, repeat=False, blit=True, interval=1,
                    fargs=(loaded_u_arrays, loaded_h_arrays))  # frames=len(loaded_u_arrays)

# Create the progress bar with tqdm
with tqdm(total=frames_to_do, desc='Creating Animation') as pbar:  # total=len(loaded_u_arrays)
    for i in range(frames_to_do):  # for i in range(len(loaded_u_arrays)):
        update(i, loaded_u_arrays, loaded_h_arrays)  # Manually update the frame with both datasets
        pbar.update(1)  # Update the progress bar

# Save the animation as a video file (e.g., MP4)
print("Converting to .mp4 now. This may take some time. This is normal, wait for Python to finish this process.")
ani.save('diffusion_animation.mp4', writer='ffmpeg', dpi=96, fps=60)

# Close the figure to prevent it from being displayed
plt.close(fig)
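
    One alternative worth sketching here (it assumes the same fig, update(), frames_to_do and loaded arrays as above, plus an ffmpeg binary on the PATH): instead of FuncAnimation plus ani.save, which re-runs the update callback for every frame at save time, frames can be streamed straight to matplotlib's FFMpegWriter. The tqdm bar then tracks the actual encoding work, and the separate "Creating Animation" warm-up loop above becomes unnecessary.

from matplotlib.animation import FFMpegWriter
from tqdm import tqdm

writer = FFMpegWriter(fps=60)
with writer.saving(fig, "diffusion_animation.mp4", dpi=96):
    for i in tqdm(range(frames_to_do), desc="Encoding"):
        update(i, loaded_u_arrays, loaded_h_arrays)   # reuse the update() defined above
        writer.grab_frame()                           # render and encode the current figure state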



    


  • My SBC Collection

    31 December 2023, by Multimedia Mike — General

    Like many computer nerds in the last decade, I have accumulated more than a few single-board computers, or “SBCs”, which are small computers based around a system-on-a-chip (SoC) that nearly always features an ARM CPU at its core. Surprisingly few of these units are Raspberry Pi units, though that brand has come to exemplify and dominate the product category.

    Also, as is the case for many computer nerds, most of these SBCs lay fallow for years at a time. Equipped with an inexpensive lightbox that I procured in the last year, I decided I could at least create glamour shots of various units and catalog them in a blog post.

    While Raspberry Pi still enjoys the most mindshare far and away, and while I do have a few Raspberry Pi units in my inventory, I have always been a bigger fan of the ODROID brand, which works with convenient importers around the world (in the USA, I can vouch for Ameridroid, to whom I’ve forked over a fair amount of cash for these computing toys).

    As mentioned, Raspberry Pi undisputedly has the most mindshare of all these SBC brands and I often wonder why… and then I immediately remind myself that it has the biggest ecosystem, and has a variety of turnkey projects and applications (such as Pi-hole and PiVPN) that promise a lower barrier to entry — as well as a slightly lower price point — than some of these other options. ODROID had a decent ecosystem for awhile, especially considering the monthly ODROID Magazine, though that ceased publication in July 2020. The Raspberry Pi and its variants were famously difficult to come by due to the global chip shortage from 2021-2023. Meanwhile, I had no trouble procuring these boards during the same timeframe.

    So let’s delve into the collection…

    Cubieboard
    The Raspberry Pi came out in 2012 and by 2013 I was somewhat coveting one to hack on. Finally! An accessible ARM platform to play with. I had heard of the BeagleBoard for years but never tried to get my hands on one. I was thinking about taking the plunge on a new Raspberry Pi, but a colleague told me I should skip that and go with this new hotness called the Cubieboard, based on an Allwinner SoC. The big value-add that this board had vs. a Raspberry Pi was that it had a SATA adapter. Although now that it has been a decade, it only now occurs to me to wonder whether it was true SATA or a USB-to-SATA bridge. Looking it up now, I’m led to believe that the SoC supported the functionality natively.

    Anyway, I did get it up and running but never did much with it, thus setting the tone for future SBC endeavors. No photos because I gave it to another tech enthusiast years ago, whose SBC collection dwarfs my own.

    ODROID-XU4
    I can’t recall exactly when or how I first encountered the ODROID brand. I probably read about it on some enthusiast page or another circa 2014 and decided to try one out. I eventually acquired a total of 3 of these ODROID-XU4 units, each with a different case, 1 with a fan and 2 passively cooled:

    Collection of ODROID-XU4 SBCs

    This is based on the Samsung Exynos 5422 SoC, the same series as was used in their Note 3 phone released in 2013. It has been a fun chip to play with. The XU4 was also my first introduction to the eMMC storage solution that is commonly supported on the ODROID SBCs (alongside micro-SD). eMMC offers many benefits over SD in terms of read/write speed as well as longevity/write cycles. That’s getting less relevant these days, however, as more and more SBCs are being released with direct NVMe SSD support.

    I had initially wanted to make a retro-gaming device built on this platform (see the handheld section later for more meditations on that). In support of this common hobbyist goal, there is this nifty XU4 case which apes the aesthetic of the Nintendo N64:

    ODROID-XU4 N64-style case

    It even has a cool programmable LCD screen. Maybe one day I’ll find a use for it.

    For awhile, one of these XU4 units (likely the noisy, fan-cooled one) was contributing results to the FFmpeg FATE system.

    While it features gigabit ethernet and a USB3 port, I once tried to see if I could get 2 Gbps throughput with the unit using a USB3-gigabit dongle. I had curious results in that the total amount of traffic throughput could never exceed 1 Gbps across both interfaces. I.e., if 1 interface was dealing with 1 Gbps and the other interface tried to run at 1 Gbps, they would both only run at 500 Mbps. That remains a mystery to me since I don’t see that limitation with Intel chips.

    Still, the XU4 has been useful for a variety of projects and prototyping over the years.

    ODROID-HC2 NAS
    I find that a lot of my fellow nerds massively overengineer their homelab NAS setups. I’ll explore this in a future post. For my part, people tend to find my homelab NAS solution slightly underengineered. This is the ODROID-HC2 (the “HC” stands for “Home Cloud”):

    ODROID-HC2 NAS

    It has the same guts as the ODROID-XU4 except no video output and the USB3 function is leveraged for a SATA bridge. This allows you to plug a SATA hard drive directly into the unit:

    ODROID-HC2 NAS uncovered

    Believe it or not, this has been my home NAS solution for something like 6 or 7 years now– I don’t clearly remember when I purchased it and put it into service.

    But isn’t this sort of irresponsible? What about a failure of the main drive? That’s why I have an external drive connected for backing up the most important data via rsync:

    ODROID-HC2 NAS backup enclosure

    The power consumption can’t be beat– Profiling for a few weeks of average usage worked out to 4.5 kWh for the ODROID-HC2… per month.

    ODROID-C2
    I was on a kick of ordering more SBCs at one point. This is the ODROID-C2, equipped with a 64-bit Amlogic SoC:

    ODROID-C2

    I had this on the FATE farm for awhile, performing 64-bit ARM builds (vs. the XU4’s 32-bit builds). As memory serves, it was unreliable and would occasionally freeze up.

    Here is a view of the eMMC storage through the bottom of the translucent case:

    Bottom of ODROID-C2 with view of eMMC storage

    ODROID-N2+
    Out of all my ODROID SBCs, this is the unit that I long to “get back to” the most– the ODROID-N2+:

    ODROID-N2+

    Very capable unit that makes a great little desktop. I have some projects I want to develop using it so that it will force me to have a focused development environment.

    Raspberry Pi
    Eventually, I did break down and get a Raspberry Pi. I had a specific purpose in mind and, much to my surprise, I have stuck to it:

    Original Raspberry Pi

    I was using one of the ODROID-XU4 units as a VPN gateway. Eventually, I wanted to convert the XU4 to something else and I decided to run the VPN gateway as an appliance on the simplest device I could. So I procured this complete hand-me-down unit from eBay and went to work. This was also the first time I discovered the DietPi distribution and this box has been in service running Wireguard via PiVPN for many years.

    I also have a Raspberry Pi 3B+ kicking around somewhere. I used it as a Steam Link device for awhile.

    SOPINE + Baseboard
    Also procured when I was on this “let’s buy random SBCs” kick. The Pine64 SOPINE is actually a compute module that comes in the form factor of a memory module.

    Pine64 SOPINE Compute Module

    Back to using Allwinner SoCs. In order to make this thing useful, you need to place it in something. It’s possible to get a mini-ITX form factor board that can accommodate 7 of these modules. Before going to that extreme, there is this much simpler baseboard which can also use eMMC for storage.

    Baseboard with SOPINE, eMMC, and heat sinks

    I really need to find an appropriate case for this one as it currently performs its duty while sitting on an anti-static bag.

    NanoPi NEO3
    I enjoy running the DietPi distribution on many of these SBCs (as it’s developed not just for Raspberry Pi). I have also found their website to be a useful resource for discovering new SBCs. That’s how I found the NanoPi series and zeroed in on this NEO3 unit, sporting a Rockchip SoC, and photographed here with some American currency in order to illustrate its relative size:

    NanoPi NEO3

    I often forget about this computer because it’s off in another room, just quietly performing its assigned duty.

    MangoPi MQ-Pro
    So far, I’ve heard of these fruits prepending the Greek letter pi for naming small computing products:

    • Raspberry – the O.G.
    • Banana – seems to be popular for hobbyist router/switches
    • Orange
    • Atomic
    • Nano
    • Mango

    Okay, so the AtomicPi and NanoPi names don’t really make sense considering the fruit convention.

    Anyway, the newest entry is the MangoPi. These showed up on Ameridroid a few months ago. There are 2 variants: the MQ-Pro and the MQ-Quad. I picked one and rolled with it.

    MangoPi MQ-Pro pieces arrive

    When it arrived, I unpacked it, assembled the pieces, downloaded a distro, tossed that on a micro-SD card, connected a monitor and keyboard to it via its USB-C port, got the distro up and running, configured the wireless networking with a static IP address and installed sshd, and it was ready to go as a headless server for an edge application.

    MangoPi MQ-Pro components, ready for assembly

    The unit came with no instructions that I can recall. After I got it set up, I remember thinking, “What is wrong with me? Why is it that I just know how to do all of this without any documentation?”

    MangoPi MQ-Pro in first test

    Only after I got it up and running and poked around a bit did I realize that this SBC doesn’t have an ARM SoC– it’s a RISC-V SoC. It uses the Allwinner D1, so it looks like I came full circle back to Allwinner.

    MangoPi MQ-Pro with more US coinage for scale

    So I now have my first piece of RISC-V hobbyist kit, although I learned recently from Kostya that it’s not that great for multimedia.

    Handheld Gaming Units
    The folks at Hardkernel have also produced a series of handheld retro-gaming devices called ODROID-GO. The first one resembled the original Nintendo Game Boy, came as a kit to be assembled, and emulated 5 classic consoles. It also had some hackability to it. Quite a cool little device, and inexpensive too. I have since passed it along to another gaming enthusiast.

    Later came the ODROID-GO Advance, also a kit, but emulating more devices. I was extremely eager to get my hands on this since it could emulate SNES in addition to NES. It also features a headphone jack, unlike the earlier model. True to form, after I received mine, it took me about 13 months before I got around to assembling it. After that, the biggest challenge I had was trying to find an appropriate case for it.

    ODROID-GO Advance with case and headphones

    Even though it may try to copy the general aesthetic and form factor of the Game Boy Advance, cases for the GBA don’t fit this correctly.

    Further, Hardkernel have also released the ODROID-GO Super and Ultra models that do more and more. The Advance, Super, and Ultra models have powerful SoCs and feature much more hackability than the first ODROID-GO model.

    I know that the guts of the Advance have been used in other products as well. The same is likely true for the Super and Ultra.

    Ultimately, the ODROID-GO Advance was just another project I assembled and then set aside since I like the idea of playing old games much more than actually doing it. Plus, the fact has finally crystalized in my mind over the past few years that I have never enjoyed handheld gaming and likely will never enjoy handheld gaming, even after I started wearing glasses. Not that I’m averse to old Game Boy / Color / Advance games, but if I’m going to play them, I’d rather emulate them on a large display.

    The Future
    In some of my weaker moments, I consider ordering up certain Banana Pi products (like the Banana Pi BPI-R2) with a case and doing my own router tricks using some open source router/firewall solution. And then I remind myself that my existing prosumer-type home router is doing just fine. But maybe one day…

    The post My SBC Collection first appeared on Breaking Eggs And Making Omelettes.