Compare commits

...

37 Commits
v0.7 ... master

Author SHA1 Message Date
Jonas Letzbor ce755cb1a3
Change H264 encoder from baseline to main 2024-06-11 22:25:48 +02:00
Attila Fidan 115346f074 server: Support opening from a bound socket fd
Add a `struct nvnc* nvnc_open_from_fd(int fd)` function which takes an
existing connection-based socket file descriptor bound by the library
user or a parent process and just calls listen() on it, as an
alternative to letting neatvnc handle socket configuration.
2024-06-02 09:44:28 +00:00
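For illustration, a minimal sketch of using the new entry point: the caller creates and binds the socket itself and hands the fd over; neatvnc only calls listen() on it. Everything apart from nvnc_open_from_fd() and the standard socket calls is an assumption, not part of this commit.

#include "neatvnc.h"
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

/* Hypothetical helper: bind a TCP socket ourselves, then let neatvnc
 * listen on it via nvnc_open_from_fd(). */
static struct nvnc* open_on_prebound_socket(uint16_t port)
{
	int fd = socket(AF_INET, SOCK_STREAM | SOCK_CLOEXEC, 0);
	if (fd < 0)
		return NULL;

	struct sockaddr_in addr = { 0 };
	addr.sin_family = AF_INET;
	addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
	addr.sin_port = htons(port);

	if (bind(fd, (struct sockaddr*)&addr, sizeof(addr)) < 0) {
		close(fd);
		return NULL;
	}

	/* neatvnc calls listen() internally for this socket type. */
	return nvnc_open_from_fd(fd);
}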
Andri Yngvason 0e93aa969f Implement qemu/vmware LED state 2024-04-07 12:28:37 +00:00
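Purely for illustration (this helper is not in the changeset), a sketch of how a caller might forward host lock-key state to all clients, assuming the nvnc_client_set_led_state() and nvnc_keyboard_led_state declarations added further down in this diff:

#include "neatvnc.h"
#include <stdbool.h>

/* Hypothetical callback: push the host's current lock-key state to every
 * connected client using the new LED state API. */
static void broadcast_led_state(struct nvnc* server, bool caps, bool num,
		bool scroll)
{
	enum nvnc_keyboard_led_state state = 0;
	if (caps)
		state |= NVNC_KEYBOARD_LED_CAPS_LOCK;
	if (num)
		state |= NVNC_KEYBOARD_LED_NUM_LOCK;
	if (scroll)
		state |= NVNC_KEYBOARD_LED_SCROLL_LOCK;

	for (struct nvnc_client* c = nvnc_client_first(server); c;
			c = nvnc_client_next(c))
		nvnc_client_set_led_state(c, state);
}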
Andri Yngvason a77b99f2b4 FUNDING.yml: Add github sponsors 2024-03-26 10:39:54 +00:00
Andri Yngvason 47e714b2bf h264-v4l2m2m: Update copyright
Raspberry Pi Ltd. have kindly allowed me to retain copyright for this work
that they sponsored.
2024-03-26 10:38:05 +00:00
Andri Yngvason 08d0c64ff9 Implement v4l2m2m h264 encoder 2024-03-17 13:53:20 +00:00
Andri Yngvason 0bf53a4843 Create abstract h264 encoder interface 2024-03-17 13:53:20 +00:00
Andri Yngvason b043f004a8 Rename h264-encoder.c -> h264-encoder-ffmpeg-impl.c 2024-03-17 13:53:20 +00:00
Alfred Wingate d95b678d7a server: Remove undeclared variable from tracing macro
* 3647457f6d accidentally referred to
  a nonexistent variable. This leads to a build failure when the
  systemtap feature is enabled.
* Downstream bug https://bugs.gentoo.org/902141

Signed-off-by: Alfred Wingate <parona@protonmail.com>
2024-02-26 12:41:15 +00:00
Andri Yngvason 14b78d26d3 meson: Bump minor version to 0.9 2024-02-25 11:13:11 +00:00
Andri Yngvason 46432ce8ca Release v0.8.0 2024-02-25 11:11:28 +00:00
Andri Yngvason c22a0c0379 Add option to enable experimental features
With this, I can cut a release without making special modifications on the
release branch.
2024-02-25 10:59:56 +00:00
Andri Yngvason dedac2f82f Implement colour map
Instead of dropping the connection, we now implement a simple static
colour map that emulates RGB332.

The quality isn't great, but it's better than dropping the connection
without any explanation.
2024-02-20 21:59:51 +00:00
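For context, RGB332 packs 3 bits of red, 3 of green and 2 of blue into one byte. A rough sketch of the palette this implies (an illustration only, not the library's make_rgb332_pal8_map()):

#include <stdint.h>

/* Illustrative sketch: fill a 256-entry palette so that index rrrgggbb
 * maps to the corresponding 16-bit-per-channel colour. */
static void fill_rgb332_palette(uint16_t r[256], uint16_t g[256],
		uint16_t b[256])
{
	for (int i = 0; i < 256; ++i) {
		r[i] = ((i >> 5) & 7) * 0xffff / 7;
		g[i] = ((i >> 2) & 7) * 0xffff / 7;
		b[i] = (i & 3) * 0xffff / 3;
	}
}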
Andri Yngvason d654e06eea Consolidate security handshake result handling 2024-02-15 10:05:21 +00:00
Andri Yngvason 1647505e94 server: Extract encoder initialisation function 2024-02-02 22:35:23 +00:00
Andri Yngvason 9fa1027353 server: Drop current frame if formats change
If the currently in-flight frame was dispatched before a format change,
it might be in the wrong format for the client, so it's better to drop it.
2024-02-02 22:24:03 +00:00
Andri Yngvason ef106b92f1 server: Log encodings reported by client 2024-02-02 22:22:03 +00:00
Andri Yngvason f1a6710bba server: Log pixel format choice 2024-02-02 22:17:52 +00:00
Andri Yngvason 584fb77cc8 pixels: Add strings for RGB222 and BGR222 2024-02-02 22:14:28 +00:00
Andri Yngvason c7d7929f7c Keep zlib streams when switching encodings
Both RealVNC and TigerVNC clients expect zlib streams to persist across
encoding switches, so if the encoder is discarded, inflate fails when
they switch back.

fixes #109
2024-02-02 22:10:01 +00:00
Andri Yngvason 58509ca889 Warn when client chooses non-true-color pixel format
This at least lets the user know why things failed.
2024-01-30 21:50:23 +00:00
Andri Yngvason 65fc23c88d server: Allow server to request more than 32 encodings
fixes #108
2024-01-24 18:29:50 +00:00
Andri Yngvason ddd5ee123e h264-encoder: Use AV_FRAME_FLAG_KEY instead of key_frame 2024-01-01 12:27:20 +00:00
Andri Yngvason f503cbef25 Replace nvnc_client_get_hostname with nvnc_client_get_address
This is a more accurate name for what is returned since
c76129b2d2.
2023-12-31 17:59:52 +00:00
Andri Yngvason 524f9e0399 logging: Set log function to default when unset 2023-12-26 15:11:28 +00:00
Andri Yngvason 4691a35b7b logging: Add method to set thread local log function
This allows the user to override the log function in the current thread
without receiving log messages from concurrent tasks.
2023-12-26 11:58:35 +00:00
Andri Yngvason a7f6c50d6d logging: Export default log function
This allows users to intercept log messages without fully overriding the
default log handler.
2023-12-26 11:30:21 +00:00
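A minimal sketch of how the two logging additions combine; the interceptor below is hypothetical, but nvnc_default_logger() and nvnc_set_log_fn_thread_local() are the functions introduced here:

#include "neatvnc.h"
#include <stdio.h>

/* Hypothetical interceptor: tag messages from this thread, then hand
 * them to the exported default logger. */
static void my_thread_logger(const struct nvnc_log_data* meta,
		const char* message)
{
	fprintf(stderr, "[worker] ");
	nvnc_default_logger(meta, message);
}

static void install_thread_logging(void)
{
	nvnc_set_log_fn_thread_local(my_thread_logger);
}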
Andri Yngvason d80b51f650 server: Don't complete fb update more than once
If stream_send in finish_fb_update returns -1, then complete_fb_update
will be called there and in the callback to stream_send.
2023-11-19 20:28:43 +00:00
Andri Yngvason c76129b2d2 server: Remove DNS lookup
DNS lookups are slow and can fail. Under some circumstances, a lookup can
block for a significant amount of time before it completes.

The user of this library can do the lookup instead if they wish.
2023-11-05 10:29:04 +00:00
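If a host name is still wanted, the lookup can be done on the user's side. A hedged sketch using the nvnc_client_get_address() declaration added in this changeset together with POSIX getnameinfo() (the helper itself is hypothetical):

#include "neatvnc.h"
#include <sys/socket.h>
#include <netdb.h>
#include <stddef.h>

/* Hypothetical helper: resolve a client's address to a host name, which
 * the library itself no longer does. */
static int client_hostname(const struct nvnc_client* client, char* host,
		size_t host_len)
{
	struct sockaddr_storage addr;
	socklen_t addrlen = sizeof(addr);

	if (nvnc_client_get_address(client, (struct sockaddr*)&addr,
				&addrlen) < 0)
		return -1;

	return getnameinfo((struct sockaddr*)&addr, addrlen, host, host_len,
			NULL, 0, 0);
}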
Andri Yngvason 0e262c8f33 crypto: Initialise AES-ECB decode context correctly
This fixes Apple DH
2023-11-04 23:13:12 +00:00
Andri Yngvason 175d53bc41 server: Fix double-free on failed Apple DH 2023-11-04 23:10:15 +00:00
Andri Yngvason 6beb263027 Don't use tag for git version 2023-10-09 22:54:18 +00:00
Andri Yngvason a631809cbb README: Enumerate dependencies for crypto 2023-10-06 20:44:27 +00:00
Philipp Zabel 5b4141ac1d Remove superfluous whitespace
Signed-off-by: Philipp Zabel <philipp.zabel@gmail.com>
2023-10-06 20:41:30 +00:00
Philipp Zabel bc3a47a654 Indent wrapped argument lists with two tabs (function calls)
Do not align wrapped function argument lists with the opening
parenthesis. Indent them with two tabs.

Signed-off-by: Philipp Zabel <philipp.zabel@gmail.com>
2023-10-06 20:41:30 +00:00
Philipp Zabel f04284351e Indent wrapped argument lists with two tabs (function definitions)
Do not align wrapped function argument lists with the opening
parenthesis. Indent them with two tabs.

Signed-off-by: Philipp Zabel <philipp.zabel@gmail.com>
2023-10-06 20:41:30 +00:00
Andri Yngvason 457737de6c Set version for next release 2023-10-04 22:46:37 +00:00
34 changed files with 1932 additions and 808 deletions

.gitignore
View File

@@ -8,3 +8,4 @@ build
 experiments
 subprojects
 sandbox
+.vscode

View File

@@ -1 +1,2 @@
+github: any1
 patreon: andriyngvason

View File

@@ -18,6 +18,9 @@ neat.
 * gnutls (optional)
 * libdrm (optional)
 * libturbojpeg (optional)
+* nettle (optional)
+* hogweed (optional)
+* gmp (optional)
 * pixman
 * zlib

View File

@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2019 - 2020 Andri Yngvason
+ * Copyright (c) 2019 - 2024 Andri Yngvason
  *
  * Permission to use, copy, modify, and/or distribute this software for any
  * purpose with or without fee is hereby granted, provided that the above
@@ -83,7 +83,6 @@ struct nvnc_client {
 	struct nvnc_common common;
 	int ref;
 	struct stream* net_stream;
-	char hostname[256];
 	char username[256];
 	struct nvnc* server;
 	enum nvnc_client_state state;
@@ -105,8 +104,13 @@ struct nvnc_client {
 	struct cut_text cut_text;
 	bool is_ext_notified;
 	struct encoder* encoder;
+	struct encoder* zrle_encoder;
+	struct encoder* tight_encoder;
 	uint32_t cursor_seq;
 	int quality;
+	bool formats_changed;
+	enum nvnc_keyboard_led_state led_state;
+	enum nvnc_keyboard_led_state pending_led_state;
 #ifdef HAVE_CRYPTO
 	struct crypto_key* apple_dh_secret;
@@ -127,6 +131,7 @@ enum nvnc__socket_type {
 	NVNC__SOCKET_TCP,
 	NVNC__SOCKET_UNIX,
 	NVNC__SOCKET_WEBSOCKET,
+	NVNC__SOCKET_FROM_FD,
 };
 struct nvnc {

View File

@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2021 - 2022 Andri Yngvason
+ * Copyright (c) 2021 - 2024 Andri Yngvason
  *
  * Permission to use, copy, modify, and/or distribute this software for any
  * purpose with or without fee is hereby granted, provided that the above
@@ -17,13 +17,28 @@
 #include <stdint.h>
 #include <unistd.h>
+#include <stdbool.h>
-struct h264_encoder;
 struct nvnc_fb;
+struct h264_encoder;
 typedef void (*h264_encoder_packet_handler_fn)(const void* payload, size_t size,
 		uint64_t pts, void* userdata);
+struct h264_encoder_impl {
+	struct h264_encoder* (*create)(uint32_t width, uint32_t height,
+			uint32_t format, int quality);
+	void (*destroy)(struct h264_encoder*);
+	void (*feed)(struct h264_encoder*, struct nvnc_fb*);
+};
+struct h264_encoder {
+	struct h264_encoder_impl *impl;
+	h264_encoder_packet_handler_fn on_packet_ready;
+	void* userdata;
+	bool next_frame_should_be_keyframe;
+};
 struct h264_encoder* h264_encoder_create(uint32_t width, uint32_t height,
 		uint32_t format, int quality);
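For orientation, a hedged sketch (not part of the diff) of how a backend plugs into this interface: an implementation fills in a struct h264_encoder_impl, and its create function points impl at that table, mirroring what the ffmpeg and v4l2m2m files further down do.

#include "h264-encoder.h"
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical no-op backend, only to show how the interface above is wired. */
static struct h264_encoder_impl dummy_impl;

static struct h264_encoder* dummy_create(uint32_t width, uint32_t height,
		uint32_t format, int quality)
{
	(void)width; (void)height; (void)format; (void)quality;
	struct h264_encoder* self = calloc(1, sizeof(*self));
	if (!self)
		return NULL;
	self->impl = &dummy_impl;
	return self;
}

static void dummy_destroy(struct h264_encoder* self)
{
	free(self);
}

static void dummy_feed(struct h264_encoder* self, struct nvnc_fb* fb)
{
	/* A real backend would encode fb and eventually call
	 * self->on_packet_ready(payload, size, pts, self->userdata). */
	(void)self;
	(void)fb;
}

static struct h264_encoder_impl dummy_impl = {
	.create = dummy_create,
	.destroy = dummy_destroy,
	.feed = dummy_feed,
};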

View File

@@ -22,6 +22,7 @@
 #include <stdarg.h>
 #include <stdlib.h>
 #include <assert.h>
+#include <sys/socket.h>
 #define NVNC_NO_PTS UINT64_MAX
@@ -85,6 +86,12 @@ enum nvnc_transform {
 	NVNC_TRANSFORM_FLIPPED_270 = 7,
 };
+enum nvnc_keyboard_led_state {
+	NVNC_KEYBOARD_LED_SCROLL_LOCK = 1 << 0,
+	NVNC_KEYBOARD_LED_NUM_LOCK = 1 << 1,
+	NVNC_KEYBOARD_LED_CAPS_LOCK = 1 << 2,
+};
 enum nvnc_log_level {
 	NVNC_LOG_PANIC = 0,
 	NVNC_LOG_ERROR = 1,
@@ -131,6 +138,7 @@ extern const char nvnc_version[];
 struct nvnc* nvnc_open(const char* addr, uint16_t port);
 struct nvnc* nvnc_open_unix(const char *addr);
 struct nvnc* nvnc_open_websocket(const char* addr, uint16_t port);
+struct nvnc* nvnc_open_from_fd(int fd);
 void nvnc_close(struct nvnc* self);
 void nvnc_add_display(struct nvnc*, struct nvnc_display*);
@@ -141,13 +149,17 @@ void* nvnc_get_userdata(const void* self);
 struct nvnc* nvnc_client_get_server(const struct nvnc_client* client);
 bool nvnc_client_supports_cursor(const struct nvnc_client* client);
-const char* nvnc_client_get_hostname(const struct nvnc_client* client);
+int nvnc_client_get_address(const struct nvnc_client* client,
+		struct sockaddr* restrict addr, socklen_t* restrict addrlen);
 const char* nvnc_client_get_auth_username(const struct nvnc_client* client);
 struct nvnc_client* nvnc_client_first(struct nvnc* self);
 struct nvnc_client* nvnc_client_next(struct nvnc_client* client);
 void nvnc_client_close(struct nvnc_client* client);
+void nvnc_client_set_led_state(struct nvnc_client*,
+		enum nvnc_keyboard_led_state);
 void nvnc_set_name(struct nvnc* self, const char* name);
 void nvnc_set_key_fn(struct nvnc* self, nvnc_key_fn);
@@ -235,6 +247,9 @@ void nvnc_set_cursor(struct nvnc*, struct nvnc_fb*, uint16_t width,
 		uint16_t height, uint16_t hotspot_x, uint16_t hotspot_y,
 		bool is_damaged);
+void nvnc_default_logger(const struct nvnc_log_data* meta, const char* message);
 void nvnc_set_log_fn(nvnc_log_fn);
+void nvnc_set_log_fn_thread_local(nvnc_log_fn fn);
 void nvnc_set_log_level(enum nvnc_log_level);
 void nvnc__log(const struct nvnc_log_data*, const char* fmt, ...);

View File

@@ -22,6 +22,7 @@
 #include <stdbool.h>
 struct rfb_pixel_format;
+struct rfb_set_colour_map_entries_msg;
 void pixel_to_cpixel(uint8_t* restrict dst,
 		const struct rfb_pixel_format* dst_fmt,
@@ -41,3 +42,4 @@ bool extract_alpha_mask(uint8_t* dst, const void* src, uint32_t format,
 const char* drm_format_to_string(uint32_t fmt);
 const char* rfb_pixfmt_to_string(const struct rfb_pixel_format* fmt);
+void make_rgb332_pal8_map(struct rfb_set_colour_map_entries_msg* msg);

View File

@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2019 - 2022 Andri Yngvason
+ * Copyright (c) 2019 - 2024 Andri Yngvason
  *
  * Permission to use, copy, modify, and/or distribute this software for any
  * purpose with or without fee is hereby granted, provided that the above
@@ -69,9 +69,11 @@ enum rfb_encodings {
 	RFB_ENCODING_CURSOR = -239,
 	RFB_ENCODING_DESKTOPSIZE = -223,
 	RFB_ENCODING_QEMU_EXT_KEY_EVENT = -258,
+	RFB_ENCODING_QEMU_LED_STATE = -261,
 	RFB_ENCODING_EXTENDEDDESKTOPSIZE = -308,
 	RFB_ENCODING_PTS = -1000,
 	RFB_ENCODING_NTP = -1001,
+	RFB_ENCODING_VMWARE_LED_STATE = 0x574d5668,
 };
 #define RFB_ENCODING_JPEG_HIGHQ -23
@@ -114,6 +116,13 @@ enum rfb_rsa_aes_cred_subtype {
 	RFB_RSA_AES_CRED_SUBTYPE_ONLY_PASS = 2,
 };
+// This is the same for both qemu and vmware extensions
+enum rfb_led_state {
+	RFB_LED_STATE_SCROLL_LOCK = 1 << 0,
+	RFB_LED_STATE_NUM_LOCK = 1 << 1,
+	RFB_LED_STATE_CAPS_LOCK = 1 << 2,
+};
 struct rfb_security_types_msg {
 	uint8_t n;
 	uint8_t types[0];
@@ -266,3 +275,15 @@ struct rfb_rsa_aes_challenge_msg {
 	uint16_t length;
 	uint8_t challenge[0];
 } RFB_PACKED;
+struct rfb_colour_map_entry {
+	uint16_t r, g, b;
+} RFB_PACKED;
+struct rfb_set_colour_map_entries_msg {
+	uint8_t type;
+	uint8_t padding;
+	uint16_t first_colour;
+	uint16_t n_colours;
+	struct rfb_colour_map_entry colours[0];
+} RFB_PACKED;

View File

@@ -1,7 +1,7 @@
 project(
 	'neatvnc',
 	'c',
-	version: '0.6.0',
+	version: '0.9-dev',
 	license: 'ISC',
 	default_options: [
 		'c_std=gnu11',
@@ -13,7 +13,6 @@ buildtype = get_option('buildtype')
 host_system = host_machine.system()
 c_args = [
-	'-DPROJECT_VERSION="@0@"'.format(meson.project_version()),
 	'-D_GNU_SOURCE',
 	'-fvisibility=hidden',
 	'-DAML_UNSTABLE_API=1',
@@ -27,17 +26,20 @@ if buildtype != 'debug' and buildtype != 'debugoptimized'
 	c_args += '-DNDEBUG'
 endif
+version = '"@0@"'.format(meson.project_version())
 git = find_program('git', native: true, required: false)
 if git.found()
-	git_describe = run_command([git, 'describe', '--tags', '--long'], check: false)
-	git_branch = run_command([git, 'rev-parse', '--abbrev-ref', 'HEAD'], check: false)
-	if git_describe.returncode() == 0 and git_branch.returncode() == 0
-		c_args += '-DGIT_VERSION="@0@ (@1@)"'.format(
-			git_describe.stdout().strip(),
+	git_commit = run_command([git, 'rev-parse', '--short', 'HEAD'])
+	git_branch = run_command([git, 'rev-parse', '--abbrev-ref', 'HEAD'])
+	if git_commit.returncode() == 0 and git_branch.returncode() == 0
+		version = '"v@0@-@1@ (@2@)"'.format(
+			meson.project_version(),
+			git_commit.stdout().strip(),
 			git_branch.stdout().strip(),
 		)
 	endif
 endif
+add_project_arguments('-DPROJECT_VERSION=@0@'.format(version), language: 'c')
 libdrm_inc = dependency('libdrm').partial_dependency(compile_args: true)
@@ -136,13 +138,26 @@ if gbm.found()
 	config.set('HAVE_GBM', true)
 endif
-if gbm.found() and libdrm.found() and libavcodec.found() and libavfilter.found() and libavutil.found()
-	sources += [ 'src/h264-encoder.c', 'src/open-h264.c' ]
+have_ffmpeg = gbm.found() and libdrm.found() and libavcodec.found() and libavfilter.found() and libavutil.found()
+have_v4l2 = gbm.found() and libdrm.found() and cc.check_header('linux/videodev2.h')
+if have_ffmpeg
+	sources += [ 'src/h264-encoder-ffmpeg-impl.c' ]
 	dependencies += [libdrm, libavcodec, libavfilter, libavutil]
-	config.set('ENABLE_OPEN_H264', true)
+	config.set('HAVE_FFMPEG', true)
 	config.set('HAVE_LIBAVUTIL', true)
 endif
+if have_v4l2
+	sources += [ 'src/h264-encoder-v4l2m2m-impl.c' ]
+	config.set('HAVE_V4L2', true)
+endif
+if have_ffmpeg or have_v4l2
+	sources += [ 'src/h264-encoder.c', 'src/open-h264.c' ]
+	config.set('ENABLE_OPEN_H264', true)
+endif
 if enable_websocket
 	sources += [
 		'src/ws-handshake.c',
@@ -153,6 +168,13 @@ if enable_websocket
 	config.set('ENABLE_WEBSOCKET', true)
 endif
+if get_option('experimental')
+	if buildtype == 'release'
+		warning('Experimental features enabled in release build')
+	endif
+	config.set('ENABLE_EXPERIMENTAL', true)
+endif
 configure_file(
 	output: 'config.h',
 	configuration: config,

View File

@@ -7,3 +7,4 @@ option('nettle', type: 'feature', value: 'auto', description: 'Enable nettle low
 option('systemtap', type: 'boolean', value: false, description: 'Enable tracing using sdt')
 option('gbm', type: 'feature', value: 'auto', description: 'Enable GBM integration')
 option('h264', type: 'feature', value: 'auto', description: 'Enable open h264 encoding')
+option('experimental', type: 'boolean', value: false, description: 'Enable experimental features')

View File

@@ -318,7 +318,7 @@ static struct crypto_cipher* crypto_cipher_new_aes128_ecb(
 	aes128_set_encrypt_key(&self->enc_ctx.aes128_ecb, enc_key);
 	if (dec_key)
-		aes128_set_decrypt_key(&self->enc_ctx.aes128_ecb, dec_key);
+		aes128_set_decrypt_key(&self->dec_ctx.aes128_ecb, dec_key);
 	self->encrypt = crypto_cipher_aes128_ecb_encrypt;
 	self->decrypt = crypto_cipher_aes128_ecb_decrypt;

View File

@ -0,0 +1,627 @@
/*
* Copyright (c) 2021 - 2024 Andri Yngvason
*
* Permission to use, copy, modify, and/or distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH
* REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
* AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT,
* INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM
* LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE
* OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
* PERFORMANCE OF THIS SOFTWARE.
*/
#include "h264-encoder.h"
#include "neatvnc.h"
#include "fb.h"
#include "sys/queue.h"
#include "vec.h"
#include "usdt.h"
#include <stdlib.h>
#include <stdint.h>
#include <stdbool.h>
#include <unistd.h>
#include <assert.h>
#include <gbm.h>
#include <xf86drm.h>
#include <aml.h>
#include <libavcodec/avcodec.h>
#include <libavutil/hwcontext.h>
#include <libavutil/hwcontext_drm.h>
#include <libavutil/pixdesc.h>
#include <libavutil/dict.h>
#include <libavfilter/avfilter.h>
#include <libavfilter/buffersink.h>
#include <libavfilter/buffersrc.h>
#include <libdrm/drm_fourcc.h>
struct h264_encoder;
struct fb_queue_entry {
struct nvnc_fb* fb;
TAILQ_ENTRY(fb_queue_entry) link;
};
TAILQ_HEAD(fb_queue, fb_queue_entry);
struct h264_encoder_ffmpeg {
struct h264_encoder base;
uint32_t width;
uint32_t height;
uint32_t format;
AVRational timebase;
AVRational sample_aspect_ratio;
enum AVPixelFormat av_pixel_format;
/* type: AVHWDeviceContext */
AVBufferRef* hw_device_ctx;
/* type: AVHWFramesContext */
AVBufferRef* hw_frames_ctx;
AVCodecContext* codec_ctx;
AVFilterGraph* filter_graph;
AVFilterContext* filter_in;
AVFilterContext* filter_out;
struct fb_queue fb_queue;
struct aml_work* work;
struct nvnc_fb* current_fb;
struct vec current_packet;
bool current_frame_is_keyframe;
bool please_destroy;
};
struct h264_encoder_impl h264_encoder_ffmpeg_impl;
static enum AVPixelFormat drm_to_av_pixel_format(uint32_t format)
{
switch (format) {
case DRM_FORMAT_XRGB8888:
case DRM_FORMAT_ARGB8888:
return AV_PIX_FMT_BGR0;
case DRM_FORMAT_XBGR8888:
case DRM_FORMAT_ABGR8888:
return AV_PIX_FMT_RGB0;
case DRM_FORMAT_RGBX8888:
case DRM_FORMAT_RGBA8888:
return AV_PIX_FMT_0BGR;
case DRM_FORMAT_BGRX8888:
case DRM_FORMAT_BGRA8888:
return AV_PIX_FMT_0RGB;
}
return AV_PIX_FMT_NONE;
}
static void hw_frame_desc_free(void* opaque, uint8_t* data)
{
struct AVDRMFrameDescriptor* desc = (void*)data;
assert(desc);
for (int i = 0; i < desc->nb_objects; ++i)
close(desc->objects[i].fd);
free(desc);
}
// TODO: Maybe do this once per frame inside nvnc_fb?
static AVFrame* fb_to_avframe(struct nvnc_fb* fb)
{
struct gbm_bo* bo = fb->bo;
int n_planes = gbm_bo_get_plane_count(bo);
AVDRMFrameDescriptor* desc = calloc(1, sizeof(*desc));
desc->nb_objects = n_planes;
desc->nb_layers = 1;
desc->layers[0].format = gbm_bo_get_format(bo);
desc->layers[0].nb_planes = n_planes;
for (int i = 0; i < n_planes; ++i) {
uint32_t stride = gbm_bo_get_stride_for_plane(bo, i);
desc->objects[i].fd = gbm_bo_get_fd_for_plane(bo, i);
desc->objects[i].size = stride * fb->height;
desc->objects[i].format_modifier = gbm_bo_get_modifier(bo);
desc->layers[0].format = gbm_bo_get_format(bo);
desc->layers[0].planes[i].object_index = i;
desc->layers[0].planes[i].offset = gbm_bo_get_offset(bo, i);
desc->layers[0].planes[i].pitch = stride;
}
AVFrame* frame = av_frame_alloc();
if (!frame) {
hw_frame_desc_free(NULL, (void*)desc);
return NULL;
}
frame->opaque = fb;
frame->width = fb->width;
frame->height = fb->height;
frame->format = AV_PIX_FMT_DRM_PRIME;
frame->sample_aspect_ratio = (AVRational){1, 1};
AVBufferRef* desc_ref = av_buffer_create((void*)desc, sizeof(*desc),
hw_frame_desc_free, NULL, 0);
if (!desc_ref) {
hw_frame_desc_free(NULL, (void*)desc);
av_frame_free(&frame);
return NULL;
}
frame->buf[0] = desc_ref;
frame->data[0] = (void*)desc_ref->data;
// TODO: Set colorspace?
return frame;
}
static struct nvnc_fb* fb_queue_dequeue(struct fb_queue* queue)
{
if (TAILQ_EMPTY(queue))
return NULL;
struct fb_queue_entry* entry = TAILQ_FIRST(queue);
TAILQ_REMOVE(queue, entry, link);
struct nvnc_fb* fb = entry->fb;
free(entry);
return fb;
}
static int fb_queue_enqueue(struct fb_queue* queue, struct nvnc_fb* fb)
{
struct fb_queue_entry* entry = calloc(1, sizeof(*entry));
if (!entry)
return -1;
entry->fb = fb;
nvnc_fb_ref(fb);
TAILQ_INSERT_TAIL(queue, entry, link);
return 0;
}
static int h264_encoder__init_buffersrc(struct h264_encoder_ffmpeg* self)
{
int rc;
/* Placeholder values are used to pacify input checking and the real
* values are set below.
*/
rc = avfilter_graph_create_filter(&self->filter_in,
avfilter_get_by_name("buffer"), "in",
"width=1:height=1:pix_fmt=drm_prime:time_base=1/1", NULL,
self->filter_graph);
if (rc != 0)
return -1;
AVBufferSrcParameters *params = av_buffersrc_parameters_alloc();
if (!params)
return -1;
params->format = AV_PIX_FMT_DRM_PRIME;
params->width = self->width;
params->height = self->height;
params->sample_aspect_ratio = self->sample_aspect_ratio;
params->time_base = self->timebase;
params->hw_frames_ctx = self->hw_frames_ctx;
rc = av_buffersrc_parameters_set(self->filter_in, params);
assert(rc == 0);
av_free(params);
return 0;
}
static int h264_encoder__init_filters(struct h264_encoder_ffmpeg* self)
{
int rc;
self->filter_graph = avfilter_graph_alloc();
if (!self->filter_graph)
return -1;
rc = h264_encoder__init_buffersrc(self);
if (rc != 0)
goto failure;
rc = avfilter_graph_create_filter(&self->filter_out,
avfilter_get_by_name("buffersink"), "out", NULL,
NULL, self->filter_graph);
if (rc != 0)
goto failure;
AVFilterInOut* inputs = avfilter_inout_alloc();
if (!inputs)
goto failure;
inputs->name = av_strdup("in");
inputs->filter_ctx = self->filter_in;
inputs->pad_idx = 0;
inputs->next = NULL;
AVFilterInOut* outputs = avfilter_inout_alloc();
if (!outputs) {
avfilter_inout_free(&inputs);
goto failure;
}
outputs->name = av_strdup("out");
outputs->filter_ctx = self->filter_out;
outputs->pad_idx = 0;
outputs->next = NULL;
rc = avfilter_graph_parse(self->filter_graph,
"hwmap=mode=direct:derive_device=vaapi"
",scale_vaapi=format=nv12:mode=fast",
outputs, inputs, NULL);
if (rc != 0)
goto failure;
assert(self->hw_device_ctx);
for (unsigned int i = 0; i < self->filter_graph->nb_filters; ++i) {
self->filter_graph->filters[i]->hw_device_ctx =
av_buffer_ref(self->hw_device_ctx);
}
rc = avfilter_graph_config(self->filter_graph, NULL);
if (rc != 0)
goto failure;
return 0;
failure:
avfilter_graph_free(&self->filter_graph);
return -1;
}
static int h264_encoder__init_codec_context(struct h264_encoder_ffmpeg* self,
const AVCodec* codec, int quality)
{
self->codec_ctx = avcodec_alloc_context3(codec);
if (!self->codec_ctx)
return -1;
struct AVCodecContext* c = self->codec_ctx;
c->width = self->width;
c->height = self->height;
c->time_base = self->timebase;
c->sample_aspect_ratio = self->sample_aspect_ratio;
c->pix_fmt = AV_PIX_FMT_VAAPI;
c->gop_size = INT32_MAX; /* We'll select key frames manually */
c->max_b_frames = 0; /* B-frames are bad for latency */
c->global_quality = quality;
/* Open H.264 expects the constrained baseline profile
 * (AV_PROFILE_H264_BASELINE), but that profile is not supported by many
 * clients, so the main profile is used instead.
 */
c->profile = AV_PROFILE_H264_MAIN;
return 0;
}
static int h264_encoder__init_hw_frames_context(struct h264_encoder_ffmpeg* self)
{
self->hw_frames_ctx = av_hwframe_ctx_alloc(self->hw_device_ctx);
if (!self->hw_frames_ctx)
return -1;
AVHWFramesContext* c = (AVHWFramesContext*)self->hw_frames_ctx->data;
c->format = AV_PIX_FMT_DRM_PRIME;
c->sw_format = drm_to_av_pixel_format(self->format);
c->width = self->width;
c->height = self->height;
if (av_hwframe_ctx_init(self->hw_frames_ctx) < 0)
av_buffer_unref(&self->hw_frames_ctx);
return 0;
}
static int h264_encoder__schedule_work(struct h264_encoder_ffmpeg* self)
{
if (self->current_fb)
return 0;
self->current_fb = fb_queue_dequeue(&self->fb_queue);
if (!self->current_fb)
return 0;
DTRACE_PROBE1(neatvnc, h264_encode_frame_begin, self->current_fb->pts);
self->current_frame_is_keyframe = self->base.next_frame_should_be_keyframe;
self->base.next_frame_should_be_keyframe = false;
return aml_start(aml_get_default(), self->work);
}
static int h264_encoder__encode(struct h264_encoder_ffmpeg* self,
AVFrame* frame_in)
{
int rc;
rc = av_buffersrc_add_frame_flags(self->filter_in, frame_in,
AV_BUFFERSRC_FLAG_KEEP_REF);
if (rc != 0)
return -1;
AVFrame* filtered_frame = av_frame_alloc();
if (!filtered_frame)
return -1;
rc = av_buffersink_get_frame(self->filter_out, filtered_frame);
if (rc != 0)
goto get_frame_failure;
rc = avcodec_send_frame(self->codec_ctx, filtered_frame);
if (rc != 0)
goto send_frame_failure;
AVPacket* packet = av_packet_alloc();
assert(packet); // TODO
while (1) {
rc = avcodec_receive_packet(self->codec_ctx, packet);
if (rc != 0)
break;
vec_append(&self->current_packet, packet->data, packet->size);
packet->stream_index = 0;
av_packet_unref(packet);
}
// Frame should always start with a zero:
assert(self->current_packet.len == 0 ||
((char*)self->current_packet.data)[0] == 0);
av_packet_free(&packet);
send_frame_failure:
av_frame_unref(filtered_frame);
get_frame_failure:
av_frame_free(&filtered_frame);
return rc == AVERROR(EAGAIN) ? 0 : rc;
}
static void h264_encoder__do_work(void* handle)
{
struct h264_encoder_ffmpeg* self = aml_get_userdata(handle);
AVFrame* frame = fb_to_avframe(self->current_fb);
assert(frame); // TODO
frame->hw_frames_ctx = av_buffer_ref(self->hw_frames_ctx);
if (self->current_frame_is_keyframe) {
#if LIBAVUTIL_VERSION_INT >= AV_VERSION_INT(58, 7, 100)
frame->flags |= AV_FRAME_FLAG_KEY;
#else
frame->key_frame = 1;
#endif
frame->pict_type = AV_PICTURE_TYPE_I;
} else {
#if LIBAVUTIL_VERSION_INT >= AV_VERSION_INT(58, 7, 100)
frame->flags &= ~AV_FRAME_FLAG_KEY;
#else
frame->key_frame = 0;
#endif
frame->pict_type = AV_PICTURE_TYPE_P;
}
int rc = h264_encoder__encode(self, frame);
if (rc != 0) {
char err[256];
av_strerror(rc, err, sizeof(err));
nvnc_log(NVNC_LOG_ERROR, "Failed to encode packet: %s", err);
goto failure;
}
failure:
av_frame_unref(frame);
av_frame_free(&frame);
}
static void h264_encoder__on_work_done(void* handle)
{
struct h264_encoder_ffmpeg* self = aml_get_userdata(handle);
uint64_t pts = nvnc_fb_get_pts(self->current_fb);
nvnc_fb_release(self->current_fb);
nvnc_fb_unref(self->current_fb);
self->current_fb = NULL;
DTRACE_PROBE1(neatvnc, h264_encode_frame_end, pts);
if (self->please_destroy) {
vec_destroy(&self->current_packet);
h264_encoder_destroy(&self->base);
return;
}
if (self->current_packet.len == 0) {
nvnc_log(NVNC_LOG_WARNING, "Whoops, encoded packet length is 0");
return;
}
void* userdata = self->base.userdata;
// Must make a copy of packet because the callback might destroy the
// encoder object.
struct vec packet;
vec_init(&packet, self->current_packet.len);
vec_append(&packet, self->current_packet.data,
self->current_packet.len);
vec_clear(&self->current_packet);
h264_encoder__schedule_work(self);
self->base.on_packet_ready(packet.data, packet.len, pts, userdata);
vec_destroy(&packet);
}
static int find_render_node(char *node, size_t maxlen) {
bool r = -1;
drmDevice *devices[64];
int n = drmGetDevices2(0, devices, sizeof(devices) / sizeof(devices[0]));
for (int i = 0; i < n; ++i) {
drmDevice *dev = devices[i];
if (!(dev->available_nodes & (1 << DRM_NODE_RENDER)))
continue;
strncpy(node, dev->nodes[DRM_NODE_RENDER], maxlen);
node[maxlen - 1] = '\0';
r = 0;
break;
}
drmFreeDevices(devices, n);
return r;
}
static struct h264_encoder* h264_encoder_ffmpeg_create(uint32_t width,
uint32_t height, uint32_t format, int quality)
{
int rc;
struct h264_encoder_ffmpeg* self = calloc(1, sizeof(*self));
if (!self)
return NULL;
self->base.impl = &h264_encoder_ffmpeg_impl;
if (vec_init(&self->current_packet, 65536) < 0)
goto packet_failure;
self->work = aml_work_new(h264_encoder__do_work,
h264_encoder__on_work_done, self, NULL);
if (!self->work)
goto worker_failure;
char render_node[64];
if (find_render_node(render_node, sizeof(render_node)) < 0)
goto render_node_failure;
rc = av_hwdevice_ctx_create(&self->hw_device_ctx,
AV_HWDEVICE_TYPE_DRM, render_node, NULL, 0);
if (rc != 0)
goto hwdevice_ctx_failure;
self->base.next_frame_should_be_keyframe = true;
TAILQ_INIT(&self->fb_queue);
self->width = width;
self->height = height;
self->format = format;
self->timebase = (AVRational){1, 1000000};
self->sample_aspect_ratio = (AVRational){1, 1};
self->av_pixel_format = drm_to_av_pixel_format(format);
if (self->av_pixel_format == AV_PIX_FMT_NONE)
goto pix_fmt_failure;
const AVCodec* codec = avcodec_find_encoder_by_name("h264_vaapi");
if (!codec)
goto codec_failure;
if (h264_encoder__init_hw_frames_context(self) < 0)
goto hw_frames_context_failure;
if (h264_encoder__init_filters(self) < 0)
goto filter_failure;
if (h264_encoder__init_codec_context(self, codec, quality) < 0)
goto codec_context_failure;
self->codec_ctx->hw_frames_ctx =
av_buffer_ref(self->filter_out->inputs[0]->hw_frames_ctx);
AVDictionary *opts = NULL;
av_dict_set_int(&opts, "async_depth", 1, 0);
rc = avcodec_open2(self->codec_ctx, codec, &opts);
av_dict_free(&opts);
if (rc != 0)
goto avcodec_open_failure;
return &self->base;
avcodec_open_failure:
avcodec_free_context(&self->codec_ctx);
codec_context_failure:
filter_failure:
av_buffer_unref(&self->hw_frames_ctx);
hw_frames_context_failure:
codec_failure:
pix_fmt_failure:
av_buffer_unref(&self->hw_device_ctx);
hwdevice_ctx_failure:
render_node_failure:
aml_unref(self->work);
worker_failure:
vec_destroy(&self->current_packet);
packet_failure:
free(self);
return NULL;
}
static void h264_encoder_ffmpeg_destroy(struct h264_encoder* base)
{
struct h264_encoder_ffmpeg* self = (struct h264_encoder_ffmpeg*)base;
if (self->current_fb) {
self->please_destroy = true;
return;
}
vec_destroy(&self->current_packet);
av_buffer_unref(&self->hw_frames_ctx);
avcodec_free_context(&self->codec_ctx);
av_buffer_unref(&self->hw_device_ctx);
avfilter_graph_free(&self->filter_graph);
aml_unref(self->work);
free(self);
}
static void h264_encoder_ffmpeg_feed(struct h264_encoder* base,
struct nvnc_fb* fb)
{
struct h264_encoder_ffmpeg* self = (struct h264_encoder_ffmpeg*)base;
assert(fb->type == NVNC_FB_GBM_BO);
// TODO: Add transform filter
assert(fb->transform == NVNC_TRANSFORM_NORMAL);
int rc = fb_queue_enqueue(&self->fb_queue, fb);
assert(rc == 0); // TODO
nvnc_fb_hold(fb);
rc = h264_encoder__schedule_work(self);
assert(rc == 0); // TODO
}
struct h264_encoder_impl h264_encoder_ffmpeg_impl = {
.create = h264_encoder_ffmpeg_create,
.destroy = h264_encoder_ffmpeg_destroy,
.feed = h264_encoder_ffmpeg_feed,
};

View File

@ -0,0 +1,741 @@
/*
* Copyright (c) 2024 Andri Yngvason
*
* Permission to use, copy, modify, and/or distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH
* REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
* AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT,
* INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM
* LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE
* OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
* PERFORMANCE OF THIS SOFTWARE.
*/
#include "h264-encoder.h"
#include "neatvnc.h"
#include "fb.h"
#include "pixels.h"
#include <assert.h>
#include <string.h>
#include <stdio.h>
#include <inttypes.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <drm_fourcc.h>
#include <gbm.h>
#include <aml.h>
#include <dirent.h>
#define UDIV_UP(a, b) (((a) + (b) - 1) / (b))
#define ALIGN_UP(a, b) ((b) * UDIV_UP((a), (b)))
#define ARRAY_LENGTH(a) (sizeof(a) / sizeof((a)[0]))
#define N_SRC_BUFS 3
#define N_DST_BUFS 3
struct h264_encoder_v4l2m2m_dst_buf {
struct v4l2_buffer buffer;
struct v4l2_plane plane;
void* payload;
};
struct h264_encoder_v4l2m2m_src_buf {
struct v4l2_buffer buffer;
struct v4l2_plane planes[4];
int fd;
bool is_taken;
struct nvnc_fb* fb;
};
struct h264_encoder_v4l2m2m {
struct h264_encoder base;
uint32_t width;
uint32_t height;
uint32_t format;
int quality; // TODO: Can we affect the quality?
char driver[16];
int fd;
struct aml_handler* handler;
struct h264_encoder_v4l2m2m_src_buf src_bufs[N_SRC_BUFS];
int src_buf_index;
struct h264_encoder_v4l2m2m_dst_buf dst_bufs[N_DST_BUFS];
};
struct h264_encoder_impl h264_encoder_v4l2m2m_impl;
static int v4l2_qbuf(int fd, const struct v4l2_buffer* inbuf)
{
assert(inbuf->length <= 4);
struct v4l2_plane planes[4];
struct v4l2_buffer outbuf;
outbuf = *inbuf;
memcpy(&planes, inbuf->m.planes, inbuf->length * sizeof(planes[0]));
outbuf.m.planes = planes;
return ioctl(fd, VIDIOC_QBUF, &outbuf);
}
static inline int v4l2_dqbuf(int fd, struct v4l2_buffer* buf)
{
return ioctl(fd, VIDIOC_DQBUF, buf);
}
static struct h264_encoder_v4l2m2m_src_buf* take_src_buffer(
struct h264_encoder_v4l2m2m* self)
{
unsigned int count = 0;
int i = self->src_buf_index;
struct h264_encoder_v4l2m2m_src_buf* buffer;
do {
buffer = &self->src_bufs[i++];
i %= ARRAY_LENGTH(self->src_bufs);
} while (++count < ARRAY_LENGTH(self->src_bufs) && buffer->is_taken);
if (buffer->is_taken)
return NULL;
self->src_buf_index = i;
buffer->is_taken = true;
return buffer;
}
static bool any_src_buf_is_taken(struct h264_encoder_v4l2m2m* self)
{
bool result = false;
for (unsigned int i = 0; i < ARRAY_LENGTH(self->src_bufs); ++i)
if (self->src_bufs[i].is_taken)
result = true;
return result;
}
static int u32_cmp(const void* pa, const void* pb)
{
const uint32_t *a = pa;
const uint32_t *b = pb;
return *a < *b ? -1 : *a > *b;
}
static size_t get_supported_formats(struct h264_encoder_v4l2m2m* self,
uint32_t* formats, size_t max_len)
{
size_t i = 0;
for (;; ++i) {
struct v4l2_fmtdesc desc = {
.index = i,
.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE,
};
int rc = ioctl(self->fd, VIDIOC_ENUM_FMT, &desc);
if (rc < 0)
break;
nvnc_trace("Got pixel format: %s", desc.description);
formats[i] = desc.pixelformat;
}
qsort(formats, i, sizeof(*formats), u32_cmp);
return i;
}
static bool have_v4l2_format(const uint32_t* formats, size_t n_formats,
uint32_t format)
{
return bsearch(&format, formats, n_formats, sizeof(format), u32_cmp);
}
static uint32_t v4l2_format_from_drm(const uint32_t* formats,
size_t n_formats, uint32_t drm_format)
{
#define TRY_FORMAT(f) \
if (have_v4l2_format(formats, n_formats, f)) \
return f
switch (drm_format) {
case DRM_FORMAT_RGBX8888:
case DRM_FORMAT_RGBA8888:
TRY_FORMAT(V4L2_PIX_FMT_RGBX32);
TRY_FORMAT(V4L2_PIX_FMT_RGBA32);
break;
case DRM_FORMAT_XRGB8888:
case DRM_FORMAT_ARGB8888:
TRY_FORMAT(V4L2_PIX_FMT_XRGB32);
TRY_FORMAT(V4L2_PIX_FMT_ARGB32);
TRY_FORMAT(V4L2_PIX_FMT_RGB32);
break;
case DRM_FORMAT_BGRX8888:
case DRM_FORMAT_BGRA8888:
TRY_FORMAT(V4L2_PIX_FMT_XBGR32);
TRY_FORMAT(V4L2_PIX_FMT_ABGR32);
TRY_FORMAT(V4L2_PIX_FMT_BGR32);
break;
case DRM_FORMAT_XBGR8888:
case DRM_FORMAT_ABGR8888:
TRY_FORMAT(V4L2_PIX_FMT_BGRX32);
TRY_FORMAT(V4L2_PIX_FMT_BGRA32);
break;
// TODO: More formats
}
return 0;
#undef TRY_FORMAT
}
// This driver mixes up pixel formats...
static uint32_t v4l2_format_from_drm_bcm2835(const uint32_t* formats,
size_t n_formats, uint32_t drm_format)
{
switch (drm_format) {
case DRM_FORMAT_XRGB8888:
case DRM_FORMAT_ARGB8888:
return V4L2_PIX_FMT_RGBA32;
case DRM_FORMAT_BGRX8888:
case DRM_FORMAT_BGRA8888:
// TODO: This could also be ABGR, based on how this driver
// behaves
return V4L2_PIX_FMT_BGR32;
}
return 0;
}
static int set_src_fmt(struct h264_encoder_v4l2m2m* self)
{
int rc;
uint32_t supported_formats[256];
size_t n_formats = get_supported_formats(self, supported_formats,
ARRAY_LENGTH(supported_formats));
uint32_t format;
if (strcmp(self->driver, "bcm2835-codec") == 0)
format = v4l2_format_from_drm_bcm2835(supported_formats,
n_formats, self->format);
else
format = v4l2_format_from_drm(supported_formats, n_formats,
self->format);
if (!format) {
nvnc_log(NVNC_LOG_DEBUG, "Failed to find a proper pixel format");
return -1;
}
struct v4l2_format fmt = {
.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE,
};
rc = ioctl(self->fd, VIDIOC_G_FMT, &fmt);
if (rc < 0) {
return -1;
}
struct v4l2_pix_format_mplane* pix_fmt = &fmt.fmt.pix_mp;
pix_fmt->pixelformat = format;
pix_fmt->width = ALIGN_UP(self->width, 16);
pix_fmt->height = ALIGN_UP(self->height, 16);
rc = ioctl(self->fd, VIDIOC_S_FMT, &fmt);
if (rc < 0) {
return -1;
}
return 0;
}
static int set_dst_fmt(struct h264_encoder_v4l2m2m* self)
{
int rc;
struct v4l2_format fmt = {
.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE,
};
rc = ioctl(self->fd, VIDIOC_G_FMT, &fmt);
if (rc < 0) {
return -1;
}
struct v4l2_pix_format_mplane* pix_fmt = &fmt.fmt.pix_mp;
pix_fmt->pixelformat = V4L2_PIX_FMT_H264;
pix_fmt->width = self->width;
pix_fmt->height = self->height;
rc = ioctl(self->fd, VIDIOC_S_FMT, &fmt);
if (rc < 0) {
return -1;
}
return 0;
}
static int alloc_dst_buffers(struct h264_encoder_v4l2m2m* self)
{
int n_bufs = ARRAY_LENGTH(self->dst_bufs);
int rc;
struct v4l2_requestbuffers req = {
.memory = V4L2_MEMORY_MMAP,
.count = n_bufs,
.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE,
};
rc = ioctl(self->fd, VIDIOC_REQBUFS, &req);
if (rc < 0)
return -1;
for (unsigned int i = 0; i < req.count; ++i) {
struct h264_encoder_v4l2m2m_dst_buf* buffer = &self->dst_bufs[i];
struct v4l2_buffer* buf = &buffer->buffer;
buf->index = i;
buf->type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
buf->memory = V4L2_MEMORY_MMAP;
buf->length = 1;
buf->m.planes = &buffer->plane;
rc = ioctl(self->fd, VIDIOC_QUERYBUF, buf);
if (rc < 0)
return -1;
buffer->payload = mmap(0, buffer->plane.length,
PROT_READ | PROT_WRITE, MAP_SHARED, self->fd,
buffer->plane.m.mem_offset);
if (buffer->payload == MAP_FAILED) {
nvnc_log(NVNC_LOG_ERROR, "Whoops, mapping failed: %m");
return -1;
}
}
return 0;
}
static void enqueue_dst_buffers(struct h264_encoder_v4l2m2m* self)
{
for (unsigned int i = 0; i < ARRAY_LENGTH(self->dst_bufs); ++i) {
int rc = v4l2_qbuf(self->fd, &self->dst_bufs[i].buffer);
assert(rc >= 0);
}
}
static void process_dst_bufs(struct h264_encoder_v4l2m2m* self)
{
int rc;
struct v4l2_plane plane = { 0 };
struct v4l2_buffer buf = {
.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE,
.memory = V4L2_MEMORY_MMAP,
.length = 1,
.m.planes = &plane,
};
while (true) {
rc = v4l2_dqbuf(self->fd, &buf);
if (rc < 0)
break;
uint64_t pts = buf.timestamp.tv_sec * UINT64_C(1000000) +
buf.timestamp.tv_usec;
struct h264_encoder_v4l2m2m_dst_buf* dstbuf =
&self->dst_bufs[buf.index];
size_t size = buf.m.planes[0].bytesused;
static uint64_t last_pts;
if (last_pts && last_pts > pts) {
nvnc_log(NVNC_LOG_ERROR, "pts - last_pts = %"PRIi64,
(int64_t)pts - (int64_t)last_pts);
}
last_pts = pts;
nvnc_trace("Encoded frame (index %d) at %"PRIu64" µs with size: %zu",
buf.index, pts, size);
self->base.on_packet_ready(dstbuf->payload, size, pts,
self->base.userdata);
v4l2_qbuf(self->fd, &buf);
}
}
static void process_src_bufs(struct h264_encoder_v4l2m2m* self)
{
int rc;
struct v4l2_plane planes[4] = { 0 };
struct v4l2_buffer buf = {
.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE,
.memory = V4L2_MEMORY_DMABUF,
.length = 1,
.m.planes = planes,
};
while (true) {
rc = v4l2_dqbuf(self->fd, &buf);
if (rc < 0)
break;
struct h264_encoder_v4l2m2m_src_buf* srcbuf =
&self->src_bufs[buf.index];
srcbuf->is_taken = false;
// TODO: This assumes that there's only one fd
close(srcbuf->planes[0].m.fd);
nvnc_fb_unmap(srcbuf->fb);
nvnc_fb_release(srcbuf->fb);
nvnc_fb_unref(srcbuf->fb);
srcbuf->fb = NULL;
}
}
static void stream_off(struct h264_encoder_v4l2m2m* self)
{
int type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
ioctl(self->fd, VIDIOC_STREAMOFF, &type);
type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
ioctl(self->fd, VIDIOC_STREAMOFF, &type);
}
static void free_dst_buffers(struct h264_encoder_v4l2m2m* self)
{
for (unsigned int i = 0; i < ARRAY_LENGTH(self->dst_bufs); ++i) {
struct h264_encoder_v4l2m2m_dst_buf* buf = &self->dst_bufs[i];
munmap(buf->payload, buf->plane.length);
}
}
static int stream_on(struct h264_encoder_v4l2m2m* self)
{
int type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
ioctl(self->fd, VIDIOC_STREAMON, &type);
type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
return ioctl(self->fd, VIDIOC_STREAMON, &type);
}
static int alloc_src_buffers(struct h264_encoder_v4l2m2m* self)
{
int rc;
struct v4l2_requestbuffers req = {
.memory = V4L2_MEMORY_DMABUF,
.count = N_SRC_BUFS,
.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE,
};
rc = ioctl(self->fd, VIDIOC_REQBUFS, &req);
if (rc < 0)
return -1;
for (int i = 0; i < N_SRC_BUFS; ++i) {
struct h264_encoder_v4l2m2m_src_buf* buffer = &self->src_bufs[i];
struct v4l2_buffer* buf = &buffer->buffer;
buf->index = i;
buf->type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
buf->memory = V4L2_MEMORY_DMABUF;
buf->length = 1;
buf->m.planes = buffer->planes;
rc = ioctl(self->fd, VIDIOC_QUERYBUF, buf);
if (rc < 0)
return -1;
}
return 0;
}
static void force_key_frame(struct h264_encoder_v4l2m2m* self)
{
struct v4l2_control ctrl = { 0 };
ctrl.id = V4L2_CID_MPEG_VIDEO_FORCE_KEY_FRAME;
ctrl.value = 0;
ioctl(self->fd, VIDIOC_S_CTRL, &ctrl);
}
static void encode_buffer(struct h264_encoder_v4l2m2m* self,
struct nvnc_fb* fb)
{
struct h264_encoder_v4l2m2m_src_buf* srcbuf = take_src_buffer(self);
if (!srcbuf) {
nvnc_log(NVNC_LOG_ERROR, "Out of source buffers. Dropping frame...");
return;
}
assert(!srcbuf->fb);
nvnc_fb_ref(fb);
nvnc_fb_hold(fb);
/* For some reason the v4l2m2m h264 encoder in the Raspberry Pi 4 gets
* really glitchy unless the buffer is mapped first.
* This should probably be handled by the driver, but it's not.
*/
nvnc_fb_map(fb);
srcbuf->fb = fb;
struct gbm_bo* bo = nvnc_fb_get_gbm_bo(fb);
int n_planes = gbm_bo_get_plane_count(bo);
int fd = gbm_bo_get_fd(bo);
uint32_t height = ALIGN_UP(gbm_bo_get_height(bo), 16);
for (int i = 0; i < n_planes; ++i) {
uint32_t stride = gbm_bo_get_stride_for_plane(bo, i);
uint32_t offset = gbm_bo_get_offset(bo, i);
uint32_t size = stride * height;
srcbuf->buffer.m.planes[i].m.fd = fd;
srcbuf->buffer.m.planes[i].bytesused = size;
srcbuf->buffer.m.planes[i].length = size;
srcbuf->buffer.m.planes[i].data_offset = offset;
}
srcbuf->buffer.timestamp.tv_sec = fb->pts / UINT64_C(1000000);
srcbuf->buffer.timestamp.tv_usec = fb->pts % UINT64_C(1000000);
if (self->base.next_frame_should_be_keyframe)
force_key_frame(self);
self->base.next_frame_should_be_keyframe = false;
int rc = v4l2_qbuf(self->fd, &srcbuf->buffer);
if (rc < 0) {
nvnc_log(NVNC_LOG_PANIC, "Failed to enqueue buffer: %m");
}
}
static void process_fd_events(void* handle)
{
struct h264_encoder_v4l2m2m* self = aml_get_userdata(handle);
process_dst_bufs(self);
}
static void h264_encoder_v4l2m2m_configure(struct h264_encoder_v4l2m2m* self)
{
struct v4l2_control ctrl = { 0 };
ctrl.id = V4L2_CID_MPEG_VIDEO_H264_PROFILE;
ctrl.value = V4L2_MPEG_VIDEO_H264_PROFILE_CONSTRAINED_BASELINE;
ioctl(self->fd, VIDIOC_S_CTRL, &ctrl);
ctrl.id = V4L2_CID_MPEG_VIDEO_H264_I_PERIOD;
ctrl.value = INT_MAX;
ioctl(self->fd, VIDIOC_S_CTRL, &ctrl);
ctrl.id = V4L2_CID_MPEG_VIDEO_BITRATE_MODE;
ctrl.value = V4L2_MPEG_VIDEO_BITRATE_MODE_CQ;
ioctl(self->fd, VIDIOC_S_CTRL, &ctrl);
ctrl.id = V4L2_CID_MPEG_VIDEO_CONSTANT_QUALITY;
ctrl.value = self->quality;
ioctl(self->fd, VIDIOC_S_CTRL, &ctrl);
}
static bool can_encode_to_h264(int fd)
{
size_t i = 0;
for (;; ++i) {
struct v4l2_fmtdesc desc = {
.index = i,
.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE,
};
int rc = ioctl(fd, VIDIOC_ENUM_FMT, &desc);
if (rc < 0)
break;
if (desc.pixelformat == V4L2_PIX_FMT_H264)
return true;
}
return false;
}
static bool can_handle_frame_size(int fd, uint32_t width, uint32_t height)
{
size_t i = 0;
for (;; ++i) {
struct v4l2_frmsizeenum size = {
.index = i,
.pixel_format = V4L2_PIX_FMT_H264,
};
int rc = ioctl(fd, VIDIOC_ENUM_FRAMESIZES, &size);
if (rc < 0)
break;
switch (size.type) {
case V4L2_FRMSIZE_TYPE_DISCRETE:
if (size.discrete.width == width &&
size.discrete.height == height)
return true;
break;
case V4L2_FRMSIZE_TYPE_CONTINUOUS:
case V4L2_FRMSIZE_TYPE_STEPWISE:
if (size.stepwise.min_width <= width &&
width <= size.stepwise.max_width &&
size.stepwise.min_height <= height &&
height <= size.stepwise.max_height &&
(16 % size.stepwise.step_width) == 0 &&
(16 % size.stepwise.step_height) == 0)
return true;
break;
}
}
return false;
}
static bool is_device_capable(int fd, uint32_t width, uint32_t height)
{
struct v4l2_capability cap = { 0 };
int rc = ioctl(fd, VIDIOC_QUERYCAP, &cap);
if (rc < 0)
return false;
uint32_t required_caps = V4L2_CAP_VIDEO_M2M_MPLANE | V4L2_CAP_STREAMING;
if ((cap.capabilities & required_caps) != required_caps)
return false;
if (!can_encode_to_h264(fd))
return false;
if (!can_handle_frame_size(fd, width, height))
return false;
return true;
}
static int find_capable_device(uint32_t width, uint32_t height)
{
int fd = -1;
DIR *dir = opendir("/dev");
assert(dir);
for (;;) {
struct dirent* entry = readdir(dir);
if (!entry)
break;
if (strncmp(entry->d_name, "video", 5) != 0)
continue;
char path[256];
snprintf(path, sizeof(path), "/dev/%s", entry->d_name);
fd = open(path, O_RDWR | O_CLOEXEC);
if (fd < 0) {
continue;
}
if (is_device_capable(fd, width, height)) {
nvnc_log(NVNC_LOG_DEBUG, "Using v4l2m2m device: %s",
path);
break;
}
close(fd);
fd = -1;
}
closedir(dir);
return fd;
}
static struct h264_encoder* h264_encoder_v4l2m2m_create(uint32_t width,
uint32_t height, uint32_t format, int quality)
{
struct h264_encoder_v4l2m2m* self = calloc(1, sizeof(*self));
if (!self)
return NULL;
self->base.impl = &h264_encoder_v4l2m2m_impl;
self->fd = -1;
self->width = width;
self->height = height;
self->format = format;
self->quality = quality;
self->fd = find_capable_device(width, height);
if (self->fd < 0)
goto failure;
struct v4l2_capability cap = { 0 };
ioctl(self->fd, VIDIOC_QUERYCAP, &cap);
strncpy(self->driver, (const char*)cap.driver, sizeof(self->driver));
if (set_src_fmt(self) < 0)
goto failure;
if (set_dst_fmt(self) < 0)
goto failure;
h264_encoder_v4l2m2m_configure(self);
if (alloc_dst_buffers(self) < 0)
goto failure;
if (alloc_src_buffers(self) < 0)
goto failure;
enqueue_dst_buffers(self);
if (stream_on(self) < 0)
goto failure;
int flags = fcntl(self->fd, F_GETFL);
fcntl(self->fd, F_SETFL, flags | O_NONBLOCK);
self->handler = aml_handler_new(self->fd, process_fd_events, self, NULL);
aml_set_event_mask(self->handler, AML_EVENT_READ);
if (aml_start(aml_get_default(), self->handler) < 0) {
aml_unref(self->handler);
goto failure;
}
return &self->base;
failure:
if (self->fd >= 0)
close(self->fd);
return NULL;
}
static void claim_all_src_bufs(
struct h264_encoder_v4l2m2m* self)
{
for (;;) {
process_src_bufs(self);
if (!any_src_buf_is_taken(self))
break;
usleep(10000);
}
}
static void h264_encoder_v4l2m2m_destroy(struct h264_encoder* base)
{
struct h264_encoder_v4l2m2m* self = (struct h264_encoder_v4l2m2m*)base;
claim_all_src_bufs(self);
aml_stop(aml_get_default(), self->handler);
aml_unref(self->handler);
stream_off(self);
free_dst_buffers(self);
if (self->fd >= 0)
close(self->fd);
free(self);
}
static void h264_encoder_v4l2m2m_feed(struct h264_encoder* base,
struct nvnc_fb* fb)
{
struct h264_encoder_v4l2m2m* self = (struct h264_encoder_v4l2m2m*)base;
process_src_bufs(self);
encode_buffer(self, fb);
}
struct h264_encoder_impl h264_encoder_v4l2m2m_impl = {
.create = h264_encoder_v4l2m2m_create,
.destroy = h264_encoder_v4l2m2m_destroy,
.feed = h264_encoder_v4l2m2m_feed,
};

View File

@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2021 - 2022 Andri Yngvason
+ * Copyright (c) 2024 Andri Yngvason
  *
  * Permission to use, copy, modify, and/or distribute this software for any
  * purpose with or without fee is hereby granted, provided that the above
@@ -15,606 +15,60 @@
  */
 #include "h264-encoder.h"
-#include "neatvnc.h"
-#include "fb.h"
-#include "sys/queue.h"
-#include "vec.h"
-#include "usdt.h"
-#include <stdlib.h>
-#include <stdint.h>
-#include <stdbool.h>
-#include <unistd.h>
-#include <assert.h>
-#include <gbm.h>
-#include <xf86drm.h>
-#include <aml.h>
-#include <libavcodec/avcodec.h>
-#include <libavutil/hwcontext.h>
-#include <libavutil/hwcontext_drm.h>
-#include <libavutil/pixdesc.h>
-#include <libavutil/dict.h>
-#include <libavfilter/avfilter.h>
-#include <libavfilter/buffersink.h>
-#include <libavfilter/buffersrc.h>
-#include <libdrm/drm_fourcc.h>
+#include "config.h"
+#ifdef HAVE_FFMPEG
+extern struct h264_encoder_impl h264_encoder_ffmpeg_impl;
+#endif
+#ifdef HAVE_V4L2
+extern struct h264_encoder_impl h264_encoder_v4l2m2m_impl;
+#endif
struct h264_encoder;
struct fb_queue_entry {
struct nvnc_fb* fb;
TAILQ_ENTRY(fb_queue_entry) link;
};
TAILQ_HEAD(fb_queue, fb_queue_entry);
struct h264_encoder {
h264_encoder_packet_handler_fn on_packet_ready;
void* userdata;
uint32_t width;
uint32_t height;
uint32_t format;
AVRational timebase;
AVRational sample_aspect_ratio;
enum AVPixelFormat av_pixel_format;
/* type: AVHWDeviceContext */
AVBufferRef* hw_device_ctx;
/* type: AVHWFramesContext */
AVBufferRef* hw_frames_ctx;
AVCodecContext* codec_ctx;
AVFilterGraph* filter_graph;
AVFilterContext* filter_in;
AVFilterContext* filter_out;
bool next_frame_should_be_keyframe;
struct fb_queue fb_queue;
struct aml_work* work;
struct nvnc_fb* current_fb;
struct vec current_packet;
bool current_frame_is_keyframe;
bool please_destroy;
};
static enum AVPixelFormat drm_to_av_pixel_format(uint32_t format)
{
switch (format) {
case DRM_FORMAT_XRGB8888:
case DRM_FORMAT_ARGB8888:
return AV_PIX_FMT_BGR0;
case DRM_FORMAT_XBGR8888:
case DRM_FORMAT_ABGR8888:
return AV_PIX_FMT_RGB0;
case DRM_FORMAT_RGBX8888:
case DRM_FORMAT_RGBA8888:
return AV_PIX_FMT_0BGR;
case DRM_FORMAT_BGRX8888:
case DRM_FORMAT_BGRA8888:
return AV_PIX_FMT_0RGB;
}
return AV_PIX_FMT_NONE;
}
static void hw_frame_desc_free(void* opaque, uint8_t* data)
{
struct AVDRMFrameDescriptor* desc = (void*)data;
assert(desc);
for (int i = 0; i < desc->nb_objects; ++i)
close(desc->objects[i].fd);
free(desc);
}
// TODO: Maybe do this once per frame inside nvnc_fb?
static AVFrame* fb_to_avframe(struct nvnc_fb* fb)
{
struct gbm_bo* bo = fb->bo;
int n_planes = gbm_bo_get_plane_count(bo);
AVDRMFrameDescriptor* desc = calloc(1, sizeof(*desc));
desc->nb_objects = n_planes;
desc->nb_layers = 1;
desc->layers[0].format = gbm_bo_get_format(bo);
desc->layers[0].nb_planes = n_planes;
for (int i = 0; i < n_planes; ++i) {
uint32_t stride = gbm_bo_get_stride_for_plane(bo, i);
desc->objects[i].fd = gbm_bo_get_fd_for_plane(bo, i);
desc->objects[i].size = stride * fb->height;
desc->objects[i].format_modifier = gbm_bo_get_modifier(bo);
desc->layers[0].format = gbm_bo_get_format(bo);
desc->layers[0].planes[i].object_index = i;
desc->layers[0].planes[i].offset = gbm_bo_get_offset(bo, i);
desc->layers[0].planes[i].pitch = stride;
}
AVFrame* frame = av_frame_alloc();
if (!frame) {
hw_frame_desc_free(NULL, (void*)desc);
return NULL;
}
frame->opaque = fb;
frame->width = fb->width;
frame->height = fb->height;
frame->format = AV_PIX_FMT_DRM_PRIME;
frame->sample_aspect_ratio = (AVRational){1, 1};
AVBufferRef* desc_ref = av_buffer_create((void*)desc, sizeof(*desc),
hw_frame_desc_free, NULL, 0);
if (!desc_ref) {
hw_frame_desc_free(NULL, (void*)desc);
av_frame_free(&frame);
return NULL;
}
frame->buf[0] = desc_ref;
frame->data[0] = (void*)desc_ref->data;
// TODO: Set colorspace?
return frame;
}
static struct nvnc_fb* fb_queue_dequeue(struct fb_queue* queue)
{
if (TAILQ_EMPTY(queue))
return NULL;
struct fb_queue_entry* entry = TAILQ_FIRST(queue);
TAILQ_REMOVE(queue, entry, link);
struct nvnc_fb* fb = entry->fb;
free(entry);
return fb;
}
static int fb_queue_enqueue(struct fb_queue* queue, struct nvnc_fb* fb)
{
struct fb_queue_entry* entry = calloc(1, sizeof(*entry));
if (!entry)
return -1;
entry->fb = fb;
nvnc_fb_ref(fb);
TAILQ_INSERT_TAIL(queue, entry, link);
return 0;
}
static int h264_encoder__init_buffersrc(struct h264_encoder* self)
{
int rc;
/* Placeholder values are used to pacify input checking and the real
* values are set below.
*/
rc = avfilter_graph_create_filter(&self->filter_in,
avfilter_get_by_name("buffer"), "in",
"width=1:height=1:pix_fmt=drm_prime:time_base=1/1", NULL,
self->filter_graph);
if (rc != 0)
return -1;
AVBufferSrcParameters *params = av_buffersrc_parameters_alloc();
if (!params)
return -1;
params->format = AV_PIX_FMT_DRM_PRIME;
params->width = self->width;
params->height = self->height;
params->sample_aspect_ratio = self->sample_aspect_ratio;
params->time_base = self->timebase;
params->hw_frames_ctx = self->hw_frames_ctx;
rc = av_buffersrc_parameters_set(self->filter_in, params);
assert(rc == 0);
av_free(params);
return 0;
}
static int h264_encoder__init_filters(struct h264_encoder* self)
{
int rc;
self->filter_graph = avfilter_graph_alloc();
if (!self->filter_graph)
return -1;
rc = h264_encoder__init_buffersrc(self);
if (rc != 0)
goto failure;
rc = avfilter_graph_create_filter(&self->filter_out,
avfilter_get_by_name("buffersink"), "out", NULL,
NULL, self->filter_graph);
if (rc != 0)
goto failure;
AVFilterInOut* inputs = avfilter_inout_alloc();
if (!inputs)
goto failure;
inputs->name = av_strdup("in");
inputs->filter_ctx = self->filter_in;
inputs->pad_idx = 0;
inputs->next = NULL;
AVFilterInOut* outputs = avfilter_inout_alloc();
if (!outputs) {
avfilter_inout_free(&inputs);
goto failure;
}
outputs->name = av_strdup("out");
outputs->filter_ctx = self->filter_out;
outputs->pad_idx = 0;
outputs->next = NULL;
rc = avfilter_graph_parse(self->filter_graph,
"hwmap=mode=direct:derive_device=vaapi"
",scale_vaapi=format=nv12:mode=fast",
outputs, inputs, NULL);
if (rc != 0)
goto failure;
assert(self->hw_device_ctx);
for (unsigned int i = 0; i < self->filter_graph->nb_filters; ++i) {
self->filter_graph->filters[i]->hw_device_ctx =
av_buffer_ref(self->hw_device_ctx);
}
rc = avfilter_graph_config(self->filter_graph, NULL);
if (rc != 0)
goto failure;
return 0;
failure:
avfilter_graph_free(&self->filter_graph);
return -1;
}
static int h264_encoder__init_codec_context(struct h264_encoder* self,
const AVCodec* codec, int quality)
{
self->codec_ctx = avcodec_alloc_context3(codec);
if (!self->codec_ctx)
return -1;
struct AVCodecContext* c = self->codec_ctx;
c->width = self->width;
c->height = self->height;
c->time_base = self->timebase;
c->sample_aspect_ratio = self->sample_aspect_ratio;
c->pix_fmt = AV_PIX_FMT_VAAPI;
c->gop_size = INT32_MAX; /* We'll select key frames manually */
c->max_b_frames = 0; /* B-frames are bad for latency */
c->global_quality = quality;
/* open-h264 requires baseline profile, so we use constrained
* baseline.
*/
c->profile = 578; /* FF_PROFILE_H264_CONSTRAINED_BASELINE */
return 0;
}
static int h264_encoder__init_hw_frames_context(struct h264_encoder* self)
{
self->hw_frames_ctx = av_hwframe_ctx_alloc(self->hw_device_ctx);
if (!self->hw_frames_ctx)
return -1;
AVHWFramesContext* c = (AVHWFramesContext*)self->hw_frames_ctx->data;
c->format = AV_PIX_FMT_DRM_PRIME;
c->sw_format = drm_to_av_pixel_format(self->format);
c->width = self->width;
c->height = self->height;
if (av_hwframe_ctx_init(self->hw_frames_ctx) < 0)
av_buffer_unref(&self->hw_frames_ctx);
return 0;
}
static int h264_encoder__schedule_work(struct h264_encoder* self)
{
if (self->current_fb)
return 0;
self->current_fb = fb_queue_dequeue(&self->fb_queue);
if (!self->current_fb)
return 0;
DTRACE_PROBE1(neatvnc, h264_encode_frame_begin, self->current_fb->pts);
self->current_frame_is_keyframe = self->next_frame_should_be_keyframe;
self->next_frame_should_be_keyframe = false;
return aml_start(aml_get_default(), self->work);
}
static int h264_encoder__encode(struct h264_encoder* self, AVFrame* frame_in)
{
int rc;
rc = av_buffersrc_add_frame_flags(self->filter_in, frame_in,
AV_BUFFERSRC_FLAG_KEEP_REF);
if (rc != 0)
return -1;
AVFrame* filtered_frame = av_frame_alloc();
if (!filtered_frame)
return -1;
rc = av_buffersink_get_frame(self->filter_out, filtered_frame);
if (rc != 0)
goto get_frame_failure;
rc = avcodec_send_frame(self->codec_ctx, filtered_frame);
if (rc != 0)
goto send_frame_failure;
AVPacket* packet = av_packet_alloc();
assert(packet); // TODO
while (1) {
rc = avcodec_receive_packet(self->codec_ctx, packet);
if (rc != 0)
break;
vec_append(&self->current_packet, packet->data, packet->size);
packet->stream_index = 0;
av_packet_unref(packet);
}
// Frame should always start with a zero:
assert(self->current_packet.len == 0 ||
((char*)self->current_packet.data)[0] == 0);
av_packet_free(&packet);
send_frame_failure:
av_frame_unref(filtered_frame);
get_frame_failure:
av_frame_free(&filtered_frame);
return rc == AVERROR(EAGAIN) ? 0 : rc;
}
static void h264_encoder__do_work(void* handle)
{
struct h264_encoder* self = aml_get_userdata(handle);
AVFrame* frame = fb_to_avframe(self->current_fb);
assert(frame); // TODO
frame->hw_frames_ctx = av_buffer_ref(self->hw_frames_ctx);
if (self->current_frame_is_keyframe) {
frame->key_frame = 1;
frame->pict_type = AV_PICTURE_TYPE_I;
} else {
frame->key_frame = 0;
frame->pict_type = AV_PICTURE_TYPE_P;
}
int rc = h264_encoder__encode(self, frame);
if (rc != 0) {
char err[256];
av_strerror(rc, err, sizeof(err));
nvnc_log(NVNC_LOG_ERROR, "Failed to encode packet: %s", err);
goto failure;
}
failure:
av_frame_unref(frame);
av_frame_free(&frame);
}
static void h264_encoder__on_work_done(void* handle)
{
struct h264_encoder* self = aml_get_userdata(handle);
uint64_t pts = nvnc_fb_get_pts(self->current_fb);
nvnc_fb_release(self->current_fb);
nvnc_fb_unref(self->current_fb);
self->current_fb = NULL;
DTRACE_PROBE1(neatvnc, h264_encode_frame_end, pts);
if (self->please_destroy) {
vec_destroy(&self->current_packet);
h264_encoder_destroy(self);
return;
}
if (self->current_packet.len == 0) {
nvnc_log(NVNC_LOG_WARNING, "Whoops, encoded packet length is 0");
return;
}
void* userdata = self->userdata;
// Must make a copy of packet because the callback might destroy the
// encoder object.
struct vec packet;
vec_init(&packet, self->current_packet.len);
vec_append(&packet, self->current_packet.data,
self->current_packet.len);
vec_clear(&self->current_packet);
h264_encoder__schedule_work(self);
self->on_packet_ready(packet.data, packet.len, pts, userdata);
vec_destroy(&packet);
}
static int find_render_node(char *node, size_t maxlen) {
int r = -1;
drmDevice *devices[64];
int n = drmGetDevices2(0, devices, sizeof(devices) / sizeof(devices[0]));
for (int i = 0; i < n; ++i) {
drmDevice *dev = devices[i];
if (!(dev->available_nodes & (1 << DRM_NODE_RENDER)))
continue;
strncpy(node, dev->nodes[DRM_NODE_RENDER], maxlen);
node[maxlen - 1] = '\0';
r = 0;
break;
}
drmFreeDevices(devices, n);
return r;
}
 struct h264_encoder* h264_encoder_create(uint32_t width, uint32_t height,
 		uint32_t format, int quality)
 {
-	int rc;
-
-	struct h264_encoder* self = calloc(1, sizeof(*self));
-	if (!self)
-		return NULL;
-
-	if (vec_init(&self->current_packet, 65536) < 0)
-		goto packet_failure;
-
-	self->work = aml_work_new(h264_encoder__do_work,
-			h264_encoder__on_work_done, self, NULL);
-	if (!self->work)
-		goto worker_failure;
-
-	char render_node[64];
-	if (find_render_node(render_node, sizeof(render_node)) < 0)
-		goto render_node_failure;
-
-	rc = av_hwdevice_ctx_create(&self->hw_device_ctx,
-			AV_HWDEVICE_TYPE_DRM, render_node, NULL, 0);
-	if (rc != 0)
-		goto hwdevice_ctx_failure;
-
-	self->next_frame_should_be_keyframe = true;
-	TAILQ_INIT(&self->fb_queue);
-
-	self->width = width;
-	self->height = height;
-	self->format = format;
-	self->timebase = (AVRational){1, 1000000};
-	self->sample_aspect_ratio = (AVRational){1, 1};
-	self->av_pixel_format = drm_to_av_pixel_format(format);
-	if (self->av_pixel_format == AV_PIX_FMT_NONE)
-		goto pix_fmt_failure;
-
-	const AVCodec* codec = avcodec_find_encoder_by_name("h264_vaapi");
-	if (!codec)
-		goto codec_failure;
-
-	if (h264_encoder__init_hw_frames_context(self) < 0)
-		goto hw_frames_context_failure;
-
-	if (h264_encoder__init_filters(self) < 0)
-		goto filter_failure;
-
-	if (h264_encoder__init_codec_context(self, codec, quality) < 0)
-		goto codec_context_failure;
-
-	self->codec_ctx->hw_frames_ctx =
-		av_buffer_ref(self->filter_out->inputs[0]->hw_frames_ctx);
-
-	AVDictionary *opts = NULL;
-	av_dict_set_int(&opts, "async_depth", 1, 0);
-
-	rc = avcodec_open2(self->codec_ctx, codec, &opts);
-	av_dict_free(&opts);
-	if (rc != 0)
-		goto avcodec_open_failure;
-
-	return self;
-
-avcodec_open_failure:
-	avcodec_free_context(&self->codec_ctx);
-codec_context_failure:
-filter_failure:
-	av_buffer_unref(&self->hw_frames_ctx);
-hw_frames_context_failure:
-codec_failure:
-pix_fmt_failure:
-	av_buffer_unref(&self->hw_device_ctx);
-hwdevice_ctx_failure:
-render_node_failure:
-	aml_unref(self->work);
-worker_failure:
-	vec_destroy(&self->current_packet);
-packet_failure:
-	free(self);
-	return NULL;
+	struct h264_encoder* encoder = NULL;
+
+#ifdef HAVE_V4L2
+	encoder = h264_encoder_v4l2m2m_impl.create(width, height, format, quality);
+	if (encoder) {
+		return encoder;
+	}
+#endif
+
+#ifdef HAVE_FFMPEG
+	encoder = h264_encoder_ffmpeg_impl.create(width, height, format, quality);
+	if (encoder) {
+		return encoder;
+	}
+#endif
+
+	return encoder;
 }
 
 void h264_encoder_destroy(struct h264_encoder* self)
 {
-	if (self->current_fb) {
-		self->please_destroy = true;
-		return;
-	}
-
-	vec_destroy(&self->current_packet);
-	av_buffer_unref(&self->hw_frames_ctx);
-	avcodec_free_context(&self->codec_ctx);
-	av_buffer_unref(&self->hw_device_ctx);
-	avfilter_graph_free(&self->filter_graph);
-	aml_unref(self->work);
-	free(self);
+	self->impl->destroy(self);
 }
 
 void h264_encoder_set_packet_handler_fn(struct h264_encoder* self,
-		h264_encoder_packet_handler_fn value)
+		h264_encoder_packet_handler_fn fn)
 {
-	self->on_packet_ready = value;
+	self->on_packet_ready = fn;
 }
 
-void h264_encoder_set_userdata(struct h264_encoder* self, void* value)
+void h264_encoder_set_userdata(struct h264_encoder* self, void* userdata)
 {
-	self->userdata = value;
+	self->userdata = userdata;
+}
+
+void h264_encoder_feed(struct h264_encoder* self, struct nvnc_fb* fb)
+{
+	self->impl->feed(self, fb);
 }
 
 void h264_encoder_request_keyframe(struct h264_encoder* self)
 {
 	self->next_frame_should_be_keyframe = true;
 }
-
-void h264_encoder_feed(struct h264_encoder* self, struct nvnc_fb* fb)
-{
-	assert(fb->type == NVNC_FB_GBM_BO);
-
-	// TODO: Add transform filter
-	assert(fb->transform == NVNC_TRANSFORM_NORMAL);
-
-	int rc = fb_queue_enqueue(&self->fb_queue, fb);
-	assert(rc == 0); // TODO
-	nvnc_fb_hold(fb);
-
-	rc = h264_encoder__schedule_work(self);
-	assert(rc == 0); // TODO
-}
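A minimal usage sketch of the encoder interface above, assuming the internal h264-encoder.h header and a packet-handler signature matching the on_packet_ready call site in the ffmpeg implementation; error handling is elided and this is not part of the patch.

#include "h264-encoder.h" /* assumed internal header declaring this API */
#include <stddef.h>
#include <stdint.h>

/* Signature assumed from the on_packet_ready call site above. */
static void on_packet(void* payload, size_t size, uint64_t pts, void* userdata)
{
	(void)payload; (void)size; (void)pts; (void)userdata;
	/* Hand the encoded H.264 data to the transport here. */
}

static struct h264_encoder* example_setup(uint32_t width, uint32_t height,
		uint32_t drm_format, int quality)
{
	struct h264_encoder* enc =
		h264_encoder_create(width, height, drm_format, quality);
	if (!enc)
		return NULL;

	h264_encoder_set_packet_handler_fn(enc, on_packet);
	h264_encoder_set_userdata(enc, NULL);
	h264_encoder_request_keyframe(enc);
	/* Frames are then submitted with h264_encoder_feed(enc, fb). */
	return enc;
}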

View File

@@ -25,6 +25,7 @@
 #include <stdarg.h>
 #include <string.h>
 #include <ctype.h>
+#include <threads.h>
 
 #ifdef HAVE_LIBAVUTIL
 #include <libavutil/avutil.h>
@@ -32,10 +33,8 @@
 #define EXPORT __attribute__((visibility("default")))
 
-static void default_logger(const struct nvnc_log_data* meta,
-		const char* message);
-static nvnc_log_fn log_fn = default_logger;
+static nvnc_log_fn log_fn = nvnc_default_logger;
+static thread_local nvnc_log_fn thread_local_log_fn = NULL;
 
 #ifndef NDEBUG
 static enum nvnc_log_level log_level = NVNC_LOG_DEBUG;
@@ -45,6 +44,11 @@ static enum nvnc_log_level log_level = NVNC_LOG_WARNING;
 
 static bool is_initialised = false;
 
+static nvnc_log_fn get_log_fn(void)
+{
+	return thread_local_log_fn ? thread_local_log_fn : log_fn;
+}
+
 static char* trim_left(char* str)
 {
 	while (isspace(*str))
@@ -100,14 +104,15 @@ static void nvnc__vlog(const struct nvnc_log_data* meta, const char* fmt,
 	if (meta->level <= log_level) {
 		vsnprintf(message, sizeof(message), fmt, args);
-		log_fn(meta, trim(message));
+		get_log_fn()(meta, trim(message));
 	}
 
 	if (meta->level == NVNC_LOG_PANIC)
 		abort();
 }
 
-static void default_logger(const struct nvnc_log_data* meta,
+EXPORT
+void nvnc_default_logger(const struct nvnc_log_data* meta,
 		const char* message)
 {
 	const char* level = log_level_to_string(meta->level);
@@ -178,7 +183,13 @@ void nvnc_set_log_level(enum nvnc_log_level level)
 EXPORT
 void nvnc_set_log_fn(nvnc_log_fn fn)
 {
-	log_fn = fn;
+	log_fn = fn ? fn : nvnc_default_logger;
+}
+
+EXPORT
+void nvnc_set_log_fn_thread_local(nvnc_log_fn fn)
+{
+	thread_local_log_fn = fn;
 }
 
 EXPORT
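A minimal sketch of how the per-thread hook above can be combined with the global logger; the header name and the idea of prefixing worker-thread messages are illustrative assumptions, not part of the patch.

#include <neatvnc.h> /* assumed to declare the nvnc_* logging API */
#include <stdio.h>

static void worker_thread_logger(const struct nvnc_log_data* meta,
		const char* message)
{
	/* Tag messages from this thread, then defer to the default sink. */
	fprintf(stderr, "[worker] ");
	nvnc_default_logger(meta, message);
}

static void example_worker_log_setup(void)
{
	/* NULL falls back to nvnc_default_logger for all threads without an override. */
	nvnc_set_log_fn(NULL);
	/* Overrides the logger for the calling thread only. */
	nvnc_set_log_fn_thread_local(worker_thread_logger);
}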

View File

@@ -19,6 +19,7 @@
 #include <stdlib.h>
 #include <assert.h>
 #include <libdrm/drm_fourcc.h>
+#include <math.h>
 
 #define POPCOUNT(x) __builtin_popcount(x)
 #define UDIV_UP(a, b) (((a) + (b) - 1) / (b))
@@ -636,9 +637,6 @@ const char* drm_format_to_string(uint32_t fmt)
 // Not exact, but close enough for debugging
 const char* rfb_pixfmt_to_string(const struct rfb_pixel_format* fmt)
 {
-	if (!(fmt->red_max == fmt->green_max && fmt->red_max == fmt->blue_max))
-		goto failure;
-
 	uint32_t profile = (fmt->red_shift << 16) | (fmt->green_shift << 8)
 		| (fmt->blue_shift);
@@ -657,9 +655,25 @@ const char* rfb_pixfmt_to_string(const struct rfb_pixel_format* fmt)
 	CASE(8, 4, 0): return "XRGB4444";
 	CASE(0, 4, 8): return "XBGR4444";
 	CASE(11, 5, 0): return "RGB565";
+	CASE(5, 2, 0): return "RGB332";
+	CASE(0, 2, 5): return "RGB332";
+	CASE(4, 2, 0): return "RGB222";
+	CASE(0, 2, 4): return "BGR222";
 #undef CASE
 	}
 
-failure:
 	return "UNKNOWN";
 }
+
+void make_rgb332_pal8_map(struct rfb_set_colour_map_entries_msg* msg)
+{
+	msg->type = RFB_SERVER_TO_CLIENT_SET_COLOUR_MAP_ENTRIES;
+	msg->padding = 0;
+	msg->first_colour = htons(0);
+	msg->n_colours = htons(256);
+
+	for (unsigned int i = 0; i < 256; ++i) {
+		msg->colours[i].r = htons(round(65535.0 / 7.0 * ((i >> 5) & 7)));
+		msg->colours[i].g = htons(round(65535.0 / 7.0 * ((i >> 2) & 7)));
+		msg->colours[i].b = htons(round(65535.0 / 3.0 * (i & 3)));
+	}
+}
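As a standalone sanity check of the scaling above (not part of the library), the following re-implements the per-channel arithmetic from make_rgb332_pal8_map and prints a few palette entries; build with -lm.

#include <math.h>
#include <stdio.h>

int main(void)
{
	/* 3-bit red/green fields scale by 65535/7, the 2-bit blue field by 65535/3. */
	unsigned int samples[] = { 0x00, 0xe0, 0x1c, 0x03, 0xff };
	for (size_t i = 0; i < sizeof(samples) / sizeof(samples[0]); ++i) {
		unsigned int c = samples[i];
		unsigned int r = (unsigned int)round(65535.0 / 7.0 * ((c >> 5) & 7));
		unsigned int g = (unsigned int)round(65535.0 / 7.0 * ((c >> 2) & 7));
		unsigned int b = (unsigned int)round(65535.0 / 3.0 * (c & 3));
		printf("index %3u -> r=%5u g=%5u b=%5u\n", c, r, g, b);
	}
	return 0;
}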

View File

@ -1,5 +1,5 @@
/* /*
* Copyright (c) 2019 - 2022 Andri Yngvason * Copyright (c) 2019 - 2024 Andri Yngvason
* *
* Permission to use, copy, modify, and/or distribute this software for any * Permission to use, copy, modify, and/or distribute this software for any
* purpose with or without fee is hereby granted, provided that the above * purpose with or without fee is hereby granted, provided that the above
@ -77,15 +77,17 @@
static int send_desktop_resize(struct nvnc_client* client, struct nvnc_fb* fb); static int send_desktop_resize(struct nvnc_client* client, struct nvnc_fb* fb);
static bool send_ext_support_frame(struct nvnc_client* client); static bool send_ext_support_frame(struct nvnc_client* client);
static enum rfb_encodings choose_frame_encoding(struct nvnc_client* client, static enum rfb_encodings choose_frame_encoding(struct nvnc_client* client,
struct nvnc_fb*); const struct nvnc_fb*);
static void on_encode_frame_done(struct encoder*, struct rcbuf*, uint64_t pts); static void on_encode_frame_done(struct encoder*, struct rcbuf*, uint64_t pts);
static bool client_has_encoding(const struct nvnc_client* client, static bool client_has_encoding(const struct nvnc_client* client,
enum rfb_encodings encoding); enum rfb_encodings encoding);
static void process_fb_update_requests(struct nvnc_client* client); static void process_fb_update_requests(struct nvnc_client* client);
static void sockaddr_to_string(char* dst, size_t sz,
const struct sockaddr* addr);
static const char* encoding_to_string(enum rfb_encodings encoding);
static bool client_send_led_state(struct nvnc_client* client);
#if defined(GIT_VERSION) #if defined(PROJECT_VERSION)
EXPORT const char nvnc_version[] = GIT_VERSION;
#elif defined(PROJECT_VERSION)
EXPORT const char nvnc_version[] = PROJECT_VERSION; EXPORT const char nvnc_version[] = PROJECT_VERSION;
#else #else
EXPORT const char nvnc_version[] = "UNKNOWN"; EXPORT const char nvnc_version[] = "UNKNOWN";
@ -141,6 +143,8 @@ static void client_close(struct nvnc_client* client)
client->encoder->on_done = NULL; client->encoder->on_done = NULL;
} }
encoder_unref(client->encoder); encoder_unref(client->encoder);
encoder_unref(client->zrle_encoder);
encoder_unref(client->tight_encoder);
pixman_region_fini(&client->damage); pixman_region_fini(&client->damage);
free(client->cut_text.buffer); free(client->cut_text.buffer);
free(client); free(client);
@ -266,8 +270,15 @@ static int on_version_message(struct nvnc_client* client)
} }
static int security_handshake_failed(struct nvnc_client* client, static int security_handshake_failed(struct nvnc_client* client,
const char* reason_string) const char* username, const char* reason_string)
{ {
if (username)
nvnc_log(NVNC_LOG_INFO, "Security handshake failed for \"%s\": %s",
username, reason_string);
else
nvnc_log(NVNC_LOG_INFO, "Security handshake: %s",
username, reason_string);
char buffer[256]; char buffer[256];
client->state = VNC_CLIENT_STATE_ERROR; client->state = VNC_CLIENT_STATE_ERROR;
@ -288,8 +299,15 @@ static int security_handshake_failed(struct nvnc_client* client,
return 0; return 0;
} }
static int security_handshake_ok(struct nvnc_client* client) static int security_handshake_ok(struct nvnc_client* client, const char* username)
{ {
if (username) {
nvnc_log(NVNC_LOG_INFO, "User \"%s\" authenticated", username);
strncpy(client->username, username, sizeof(client->username));
client->username[sizeof(client->username) - 1] = '\0';
}
uint32_t result = htonl(RFB_SECURITY_HANDSHAKE_OK); uint32_t result = htonl(RFB_SECURITY_HANDSHAKE_OK);
return stream_write(client->net_stream, &result, sizeof(result), NULL, return stream_write(client->net_stream, &result, sizeof(result), NULL,
NULL); NULL);
@ -326,7 +344,8 @@ static int on_vencrypt_version_message(struct nvnc_client* client)
return 0; return 0;
if (msg->major != 0 || msg->minor != 2) { if (msg->major != 0 || msg->minor != 2) {
security_handshake_failed(client, "Unsupported VeNCrypt version"); security_handshake_failed(client, NULL,
"Unsupported VeNCrypt version");
return sizeof(*msg); return sizeof(*msg);
} }
@ -395,16 +414,12 @@ static int on_vencrypt_plain_auth_message(struct nvnc_client* client)
username[MIN(ulen, sizeof(username) - 1)] = '\0'; username[MIN(ulen, sizeof(username) - 1)] = '\0';
password[MIN(plen, sizeof(password) - 1)] = '\0'; password[MIN(plen, sizeof(password) - 1)] = '\0';
strncpy(client->username, username, sizeof(client->username));
client->username[sizeof(client->username) - 1] = '\0';
if (server->auth_fn(username, password, server->auth_ud)) { if (server->auth_fn(username, password, server->auth_ud)) {
nvnc_log(NVNC_LOG_INFO, "User \"%s\" authenticated", username); security_handshake_ok(client, username);
security_handshake_ok(client);
client->state = VNC_CLIENT_STATE_WAITING_FOR_INIT; client->state = VNC_CLIENT_STATE_WAITING_FOR_INIT;
} else { } else {
nvnc_log(NVNC_LOG_INFO, "User \"%s\" rejected", username); security_handshake_failed(client, username,
security_handshake_failed(client, "Invalid username or password"); "Invalid username or password");
} }
return sizeof(*msg) + ulen + plen; return sizeof(*msg) + ulen + plen;
@ -488,13 +503,11 @@ static int on_apple_dh_response(struct nvnc_client* client)
crypto_cipher_del(cipher); crypto_cipher_del(cipher);
if (server->auth_fn(username, password, server->auth_ud)) { if (server->auth_fn(username, password, server->auth_ud)) {
nvnc_log(NVNC_LOG_INFO, "User \"%s\" authenticated", username); security_handshake_ok(client, username);
security_handshake_ok(client);
client->state = VNC_CLIENT_STATE_WAITING_FOR_INIT; client->state = VNC_CLIENT_STATE_WAITING_FOR_INIT;
} else { } else {
nvnc_log(NVNC_LOG_INFO, "User \"%s\" rejected", username); security_handshake_failed(client, username,
security_handshake_failed(client, "Invalid username or password"); "Invalid username or password");
crypto_cipher_del(cipher);
} }
return sizeof(*msg) + key_len; return sizeof(*msg) + key_len;
@ -766,12 +779,11 @@ static int on_rsa_aes_credentials(struct nvnc_client* client)
password[password_len] = '\0'; password[password_len] = '\0';
if (server->auth_fn(username, password, server->auth_ud)) { if (server->auth_fn(username, password, server->auth_ud)) {
nvnc_log(NVNC_LOG_INFO, "User \"%s\" authenticated", username); security_handshake_ok(client, username);
security_handshake_ok(client);
client->state = VNC_CLIENT_STATE_WAITING_FOR_INIT; client->state = VNC_CLIENT_STATE_WAITING_FOR_INIT;
} else { } else {
nvnc_log(NVNC_LOG_INFO, "User \"%s\" rejected", username); security_handshake_failed(client, username,
security_handshake_failed(client, "Invalid username or password"); "Invalid username or password");
} }
return 2 + username_len + password_len; return 2 + username_len + password_len;
@ -789,7 +801,7 @@ static int on_security_message(struct nvnc_client* client)
switch (type) { switch (type) {
case RFB_SECURITY_TYPE_NONE: case RFB_SECURITY_TYPE_NONE:
security_handshake_ok(client); security_handshake_ok(client, NULL);
client->state = VNC_CLIENT_STATE_WAITING_FOR_INIT; client->state = VNC_CLIENT_STATE_WAITING_FOR_INIT;
break; break;
#ifdef ENABLE_TLS #ifdef ENABLE_TLS
@ -819,7 +831,8 @@ static int on_security_message(struct nvnc_client* client)
break; break;
#endif #endif
default: default:
security_handshake_failed(client, "Unsupported security type"); security_handshake_failed(client, NULL,
"Unsupported security type");
break; break;
} }
@ -912,6 +925,30 @@ static int on_init_message(struct nvnc_client* client)
return sizeof(shared_flag); return sizeof(shared_flag);
} }
static int cook_pixel_map(struct nvnc_client* client)
{
struct rfb_pixel_format* fmt = &client->pixfmt;
// We'll just pretend that this is rgb332
fmt->true_colour_flag = true;
fmt->big_endian_flag = false;
fmt->bits_per_pixel = 8;
fmt->depth = 8;
fmt->red_max = 7;
fmt->green_max = 7;
fmt->blue_max = 3;
fmt->red_shift = 5;
fmt->green_shift = 2;
fmt->blue_shift = 0;
uint8_t buf[sizeof(struct rfb_set_colour_map_entries_msg)
+ 256 * sizeof(struct rfb_colour_map_entry)];
struct rfb_set_colour_map_entries_msg* msg =
(struct rfb_set_colour_map_entries_msg*)buf;
make_rgb332_pal8_map(msg);
return stream_write(client->net_stream, buf, sizeof(buf), NULL, NULL);
}
 
 static int on_client_set_pixel_format(struct nvnc_client* client)
 {
 	if (client->buffer_len - client->buffer_index <
@@ -922,30 +959,49 @@ static int on_client_set_pixel_format(struct nvnc_client* client)
 		(struct rfb_pixel_format*)(client->msg_buffer +
 				client->buffer_index + 4);
 
-	if (!fmt->true_colour_flag) {
-		/* We don't really know what to do with color maps right now */
-		nvnc_client_close(client);
-		return 0;
-	}
+	if (fmt->true_colour_flag) {
+		fmt->red_max = ntohs(fmt->red_max);
+		fmt->green_max = ntohs(fmt->green_max);
+		fmt->blue_max = ntohs(fmt->blue_max);
 
-	fmt->red_max = ntohs(fmt->red_max);
-	fmt->green_max = ntohs(fmt->green_max);
-	fmt->blue_max = ntohs(fmt->blue_max);
-
-	memcpy(&client->pixfmt, fmt, sizeof(client->pixfmt));
+		memcpy(&client->pixfmt, fmt, sizeof(client->pixfmt));
+	} else {
+		nvnc_log(NVNC_LOG_DEBUG, "Using color palette for client %p",
+				client);
+		cook_pixel_map(client);
+	}
 
 	client->has_pixfmt = true;
+	client->formats_changed = true;
+
+	nvnc_log(NVNC_LOG_DEBUG, "Client %p chose pixel format: %s", client,
+			rfb_pixfmt_to_string(&client->pixfmt));
 
 	return 4 + sizeof(struct rfb_pixel_format);
 }
 
+static void encodings_to_string_list(char* dst, size_t len,
+		enum rfb_encodings* encodings, size_t n)
+{
+	size_t off = 0;
+	if (n > 0)
+		off += snprintf(dst, len, "%s",
+				encoding_to_string(encodings[0]));
+	for (size_t i = 1; i < n; ++i)
+		off += snprintf(dst + off, len - off, ",%s",
+				encoding_to_string(encodings[i]));
+}
+
 static int on_client_set_encodings(struct nvnc_client* client)
 {
 	struct rfb_client_set_encodings_msg* msg =
 		(struct rfb_client_set_encodings_msg*)(client->msg_buffer +
 				client->buffer_index);
-	size_t n_encodings = MIN(MAX_ENCODINGS, ntohs(msg->n_encodings));
+	size_t n_encodings = ntohs(msg->n_encodings);
 	size_t n = 0;
 
 	if (client->buffer_len - client->buffer_index <
@@ -954,7 +1010,7 @@ static int on_client_set_encodings(struct nvnc_client* client)
 
 	client->quality = 10;
 
-	for (size_t i = 0; i < n_encodings; ++i) {
+	for (size_t i = 0; i < n_encodings && n < MAX_ENCODINGS; ++i) {
 		enum rfb_encodings encoding = htonl(msg->encodings[i]);
 
 		switch (encoding) {
@@ -970,9 +1026,18 @@ static int on_client_set_encodings(struct nvnc_client* client)
 		case RFB_ENCODING_DESKTOPSIZE:
 		case RFB_ENCODING_EXTENDEDDESKTOPSIZE:
 		case RFB_ENCODING_QEMU_EXT_KEY_EVENT:
+		case RFB_ENCODING_QEMU_LED_STATE:
+		case RFB_ENCODING_VMWARE_LED_STATE:
+#ifdef ENABLE_EXPERIMENTAL
 		case RFB_ENCODING_PTS:
 		case RFB_ENCODING_NTP:
+#endif
 			client->encodings[n++] = encoding;
+#ifndef ENABLE_EXPERIMENTAL
+		case RFB_ENCODING_PTS:
+		case RFB_ENCODING_NTP:
+			;
+#endif
 		}
 
 		if (RFB_ENCODING_JPEG_LOWQ <= encoding &&
@@ -980,7 +1045,14 @@ static int on_client_set_encodings(struct nvnc_client* client)
 			client->quality = encoding - RFB_ENCODING_JPEG_LOWQ;
 	}
 
+	char encoding_list[256] = {};
+	encodings_to_string_list(encoding_list, sizeof(encoding_list),
+			client->encodings, n);
+	nvnc_log(NVNC_LOG_DEBUG, "Client %p set encodings: %s", client,
+			encoding_list);
+
 	client->n_encodings = n;
+	client->formats_changed = true;
 
 	return sizeof(*msg) + 4 * n_encodings;
 }
@@ -1037,14 +1109,79 @@ static const char* encoding_to_string(enum rfb_encodings encoding)
 {
 	switch (encoding) {
 	case RFB_ENCODING_RAW: return "raw";
+	case RFB_ENCODING_COPYRECT: return "copyrect";
+	case RFB_ENCODING_RRE: return "rre";
+	case RFB_ENCODING_HEXTILE: return "hextile";
 	case RFB_ENCODING_TIGHT: return "tight";
+	case RFB_ENCODING_TRLE: return "trle";
 	case RFB_ENCODING_ZRLE: return "zrle";
 	case RFB_ENCODING_OPEN_H264: return "open-h264";
-	default:
-		break;
+	case RFB_ENCODING_CURSOR: return "cursor";
+	case RFB_ENCODING_DESKTOPSIZE: return "desktop-size";
+	case RFB_ENCODING_EXTENDEDDESKTOPSIZE: return "extended-desktop-size";
+	case RFB_ENCODING_QEMU_EXT_KEY_EVENT: return "qemu-extended-key-event";
+	case RFB_ENCODING_QEMU_LED_STATE: return "qemu-led-state";
+	case RFB_ENCODING_VMWARE_LED_STATE: return "vmware-led-state";
+	case RFB_ENCODING_PTS: return "pts";
+	case RFB_ENCODING_NTP: return "ntp";
 	}
 	return "UNKNOWN";
 }
 
+static bool ensure_encoder(struct nvnc_client* client, const struct nvnc_fb *fb)
+{
+	struct nvnc* server = client->server;
+
+	enum rfb_encodings encoding = choose_frame_encoding(client, fb);
+	if (client->encoder && encoding == encoder_get_type(client->encoder))
+		return true;
+
+	int width = server->display->buffer->width;
+	int height = server->display->buffer->height;
+
+	if (client->encoder) {
+		server->n_damage_clients -= !(client->encoder->impl->flags &
+				ENCODER_IMPL_FLAG_IGNORES_DAMAGE);
+		client->encoder->on_done = NULL;
+	}
+	encoder_unref(client->encoder);
+
+	/* Zlib streams need to be saved so we keep encoders around that
+	 * use them.
+	 */
+	switch (encoding) {
+	case RFB_ENCODING_ZRLE:
+		if (!client->zrle_encoder) {
+			client->zrle_encoder =
+				encoder_new(encoding, width, height);
+		}
+		client->encoder = client->zrle_encoder;
+		encoder_ref(client->encoder);
+		break;
+	case RFB_ENCODING_TIGHT:
+		if (!client->tight_encoder) {
+			client->tight_encoder =
+				encoder_new(encoding, width, height);
+		}
+		client->encoder = client->tight_encoder;
+		encoder_ref(client->encoder);
+		break;
+	default:
+		client->encoder = encoder_new(encoding, width, height);
+		break;
+	}
+
+	if (!client->encoder) {
+		nvnc_log(NVNC_LOG_ERROR, "Failed to allocate new encoder");
+		return false;
+	}
+
+	server->n_damage_clients += !(client->encoder->impl->flags &
+			ENCODER_IMPL_FLAG_IGNORES_DAMAGE);
+
+	nvnc_log(NVNC_LOG_INFO, "Choosing %s encoding for client %p",
+			encoding_to_string(encoding), client);
+
+	return true;
+}
+
 static void process_fb_update_requests(struct nvnc_client* client)
@@ -1093,41 +1230,25 @@ static void process_fb_update_requests(struct nvnc_client* client)
 		return;
 	}
 
+	if (client_send_led_state(client)) {
+		if (--client->n_pending_requests <= 0)
+			return;
+	}
+
 	if (!pixman_region_not_empty(&client->damage))
 		return;
 
-	DTRACE_PROBE1(neatvnc, update_fb_start, client);
-
-	enum rfb_encodings encoding = choose_frame_encoding(client, fb);
-	if (!client->encoder || encoding != encoder_get_type(client->encoder)) {
-		int width = server->display->buffer->width;
-		int height = server->display->buffer->height;
-
-		if (client->encoder) {
-			server->n_damage_clients -=
-				!(client->encoder->impl->flags &
-				ENCODER_IMPL_FLAG_IGNORES_DAMAGE);
-			client->encoder->on_done = NULL;
-		}
-
-		encoder_unref(client->encoder);
-		client->encoder = encoder_new(encoding, width, height);
-		if (!client->encoder) {
-			nvnc_log(NVNC_LOG_ERROR, "Failed to allocate new encoder");
-			return;
-		}
-
-		server->n_damage_clients +=
-			!(client->encoder->impl->flags &
-			ENCODER_IMPL_FLAG_IGNORES_DAMAGE);
-
-		nvnc_log(NVNC_LOG_INFO, "Choosing %s encoding for client %p",
-				encoding_to_string(encoding), client);
-	}
+	if (!ensure_encoder(client, fb))
+		return;
+
+	DTRACE_PROBE1(neatvnc, update_fb_start, client);
 
 	/* The client's damage is exchanged for an empty one */
 	struct pixman_region16 damage = client->damage;
 	pixman_region_init(&client->damage);
 
 	client->is_updating = true;
+	client->formats_changed = false;
 	client->current_fb = fb;
 	nvnc_fb_hold(fb);
 	nvnc_fb_ref(fb);
@@ -1149,6 +1270,7 @@ static void process_fb_update_requests(struct nvnc_client* client)
 		nvnc_log(NVNC_LOG_ERROR, "Failed to encode current frame");
 		client_unref(client);
 		client->is_updating = false;
+		client->formats_changed = false;
 		assert(client->current_fb);
 		nvnc_fb_release(client->current_fb);
 		nvnc_fb_unref(client->current_fb);
@@ -1673,24 +1795,6 @@ static void on_client_event(struct stream* stream, enum stream_event event)
 	client->buffer_index = 0;
 }
 
-static void record_peer_hostname(int fd, struct nvnc_client* client)
-{
-	struct sockaddr_storage storage;
-	struct sockaddr* peer = (struct sockaddr*)&storage;
-	socklen_t peerlen = sizeof(storage);
-	if (getpeername(fd, peer, &peerlen) == 0) {
-		if (peer->sa_family == AF_UNIX) {
-			snprintf(client->hostname, sizeof(client->hostname),
-					"unix domain socket");
-		} else {
-			getnameinfo(peer, peerlen,
-					client->hostname, sizeof(client->hostname),
-					NULL, 0, // no need for port
-					0);
-		}
-	}
-}
-
 static void on_connection(void* obj)
 {
 	struct nvnc* server = aml_get_userdata(obj);
@@ -1702,6 +1806,7 @@ static void on_connection(void* obj)
 	client->ref = 1;
 	client->server = server;
 	client->quality = 10; /* default to lossless */
+	client->led_state = -1; /* trigger sending of initial state */
 
 	int fd = accept(server->fd, NULL, 0);
 	if (fd < 0) {
@@ -1712,8 +1817,6 @@ static void on_connection(void* obj)
 	int one = 1;
 	setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
 
-	record_peer_hostname(fd, client);
-
 #ifdef ENABLE_WEBSOCKET
 	if (server->socket_type == NVNC__SOCKET_WEBSOCKET)
 	{
@@ -1748,7 +1851,14 @@ static void on_connection(void* obj)
 	client->state = VNC_CLIENT_STATE_WAITING_FOR_VERSION;
 
-	nvnc_log(NVNC_LOG_INFO, "New client connection from %s: %p (ref %d)", client->hostname, client, client->ref);
+	char ip_address[256];
+	struct sockaddr_storage addr;
+	socklen_t addrlen = sizeof(addr);
+	nvnc_client_get_address(client, (struct sockaddr*)&addr, &addrlen);
+	sockaddr_to_string(ip_address, sizeof(ip_address),
+			(struct sockaddr*)&addr);
+	nvnc_log(NVNC_LOG_INFO, "New client connection from %s: %p (ref %d)",
+			ip_address, client, client->ref);
 
 	return;
 
@@ -1774,6 +1884,11 @@ static void sockaddr_to_string(char* dst, size_t sz, const struct sockaddr* addr
 	case AF_INET6:
 		inet_ntop(addr->sa_family, &sa_in6->sin6_addr, dst, sz);
 		break;
+	default:
+		nvnc_log(NVNC_LOG_DEBUG,
+				"Don't know how to convert sa_family %d to string",
+				addr->sa_family);
+		break;
 	}
 }
@@ -1860,7 +1975,7 @@ static int bind_address_unix(const char* name)
 }
 
 static int bind_address(const char* name, uint16_t port,
-		enum nvnc__socket_type type)
+		int fd, enum nvnc__socket_type type)
 {
 	switch (type) {
 	case NVNC__SOCKET_TCP:
@@ -1868,6 +1983,9 @@ static int bind_address(const char* name, uint16_t port,
 		return bind_address_tcp(name, port);
 	case NVNC__SOCKET_UNIX:
 		return bind_address_unix(name);
+	case NVNC__SOCKET_FROM_FD:
+		// nothing to bind
+		return fd;
 	}
 
 	nvnc_log(NVNC_LOG_PANIC, "Unknown socket address type");
@@ -1875,7 +1993,7 @@ static int bind_address(const char* name, uint16_t port,
 }
 
 static struct nvnc* open_common(const char* address, uint16_t port,
-		enum nvnc__socket_type type)
+		int fd, enum nvnc__socket_type type)
 {
 	nvnc__log_init();
@@ -1891,7 +2009,7 @@ static struct nvnc* open_common(const char* address, uint16_t port,
 	LIST_INIT(&self->clients);
 
-	self->fd = bind_address(address, port, type);
+	self->fd = bind_address(address, port, fd, type);
 	if (self->fd < 0)
 		goto bind_failure;
@@ -1924,14 +2042,14 @@ bind_failure:
 EXPORT
 struct nvnc* nvnc_open(const char* address, uint16_t port)
 {
-	return open_common(address, port, NVNC__SOCKET_TCP);
+	return open_common(address, port, -1, NVNC__SOCKET_TCP);
 }
 
 EXPORT
 struct nvnc* nvnc_open_websocket(const char *address, uint16_t port)
 {
 #ifdef ENABLE_WEBSOCKET
-	return open_common(address, port, NVNC__SOCKET_WEBSOCKET);
+	return open_common(address, port, -1, NVNC__SOCKET_WEBSOCKET);
 #else
 	return NULL;
 #endif
@@ -1940,7 +2058,13 @@ struct nvnc* nvnc_open_websocket(const char *address, uint16_t port)
 EXPORT
 struct nvnc* nvnc_open_unix(const char* address)
 {
-	return open_common(address, 0, NVNC__SOCKET_UNIX);
+	return open_common(address, 0, -1, NVNC__SOCKET_UNIX);
+}
+
+EXPORT
+struct nvnc* nvnc_open_from_fd(int fd)
+{
+	return open_common(NULL, 0, fd, NVNC__SOCKET_FROM_FD);
 }
 
 static void unlink_fd_path(int fd)
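A minimal sketch of the new entry point, assuming a listening-capable socket inherited from a parent process; the fd number follows the common LISTEN_FDS convention and is not something this patch defines.

#include <neatvnc.h> /* assumed to declare nvnc_open_from_fd() */

static struct nvnc* example_open_from_inherited_fd(void)
{
	/* fd 3 is the conventional first inherited socket (SD_LISTEN_FDS_START). */
	int inherited_fd = 3;
	return nvnc_open_from_fd(inherited_fd);
}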
@@ -1996,6 +2120,8 @@ void nvnc_close(struct nvnc* self)
 
 static void complete_fb_update(struct nvnc_client* client)
 {
+	if (!client->is_updating)
+		return;
 	client->is_updating = false;
 	assert(client->current_fb);
 	nvnc_fb_release(client->current_fb);
@@ -2003,7 +2129,7 @@ static void complete_fb_update(struct nvnc_client* client)
 	client->current_fb = NULL;
 	process_fb_update_requests(client);
 	client_unref(client);
-	DTRACE_PROBE2(neatvnc, update_fb_done, client, pts);
+	DTRACE_PROBE1(neatvnc, update_fb_done, client);
 }
 
 static void on_write_frame_done(void* userdata, enum stream_req_status status)
@@ -2013,7 +2139,7 @@ static void on_write_frame_done(void* userdata, enum stream_req_status status)
 }
 
 static enum rfb_encodings choose_frame_encoding(struct nvnc_client* client,
-		struct nvnc_fb* fb)
+		const struct nvnc_fb* fb)
 {
 	for (size_t i = 0; i < client->n_encodings; ++i)
 		switch (client->encodings[i]) {
@@ -2053,6 +2179,17 @@ static void finish_fb_update(struct nvnc_client* client, struct rcbuf* payload,
 	if (client->net_stream->state == STREAM_STATE_CLOSED)
 		goto complete;
 
+	if (client->formats_changed) {
+		/* Client has requested new pixel format or encoding in the
+		 * meantime, so it probably won't know what to do with this
+		 * frame. Pending requests get incremented because this one is
+		 * dropped.
+		 */
+		nvnc_log(NVNC_LOG_DEBUG, "Client changed pixel format or encoding with in-flight buffer");
+		client->n_pending_requests++;
+		goto complete;
+	}
+
 	DTRACE_PROBE2(neatvnc, send_fb_start, client, pts);
 	n_rects += will_send_pts(client, pts) ? 1 : 0;
 	struct rfb_server_fb_update_msg update_msg = {
@@ -2266,10 +2403,9 @@ struct nvnc* nvnc_client_get_server(const struct nvnc_client* client)
 }
 
 EXPORT
-const char* nvnc_client_get_hostname(const struct nvnc_client* client) {
-	if (client->hostname[0] == '\0')
-		return NULL;
-	return client->hostname;
+int nvnc_client_get_address(const struct nvnc_client* client,
+		struct sockaddr* restrict addr, socklen_t* restrict addrlen) {
+	return getpeername(client->net_stream->fd, addr, addrlen);
 }
 
 EXPORT
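A sketch of calling the replacement accessor from application code; the getnameinfo-based formatting is just one way to turn the address into text and is not part of this patch.

#include <neatvnc.h> /* assumed to declare nvnc_client_get_address() */
#include <netdb.h>
#include <stdio.h>
#include <sys/socket.h>

static void example_log_client_address(const struct nvnc_client* client)
{
	struct sockaddr_storage storage;
	socklen_t len = sizeof(storage);

	if (nvnc_client_get_address(client, (struct sockaddr*)&storage, &len) < 0)
		return;

	char host[NI_MAXHOST] = "";
	if (getnameinfo((struct sockaddr*)&storage, len, host, sizeof(host),
				NULL, 0, NI_NUMERICHOST) == 0)
		printf("client address: %s\n", host);
}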
@@ -2309,6 +2445,60 @@ bool nvnc_client_supports_cursor(const struct nvnc_client* client)
 	return false;
 }
 
+static bool client_send_led_state(struct nvnc_client* client)
+{
+	if (client->pending_led_state == client->led_state)
+		return false;
+
+	bool have_qemu_led_state =
+		client_has_encoding(client, RFB_ENCODING_QEMU_LED_STATE);
+	bool have_vmware_led_state =
+		client_has_encoding(client, RFB_ENCODING_VMWARE_LED_STATE);
+
+	if (!have_qemu_led_state && !have_vmware_led_state)
+		return false;
+
+	nvnc_log(NVNC_LOG_DEBUG, "Keyboard LED state changed: %x -> %x",
+			client->led_state, client->pending_led_state);
+
+	struct vec payload;
+	vec_init(&payload, 4096);
+
+	struct rfb_server_fb_update_msg head = {
+		.type = RFB_SERVER_TO_CLIENT_FRAMEBUFFER_UPDATE,
+		.n_rects = htons(1),
+	};
+	struct rfb_server_fb_rect rect = {
+		.encoding = htonl(RFB_ENCODING_QEMU_LED_STATE),
+	};
+
+	vec_append(&payload, &head, sizeof(head));
+	vec_append(&payload, &rect, sizeof(rect));
+
+	if (have_qemu_led_state) {
+		uint8_t data = client->pending_led_state;
+		vec_append(&payload, &data, sizeof(data));
+	} else if (have_vmware_led_state) {
+		uint32_t data = htonl(client->pending_led_state);
+		vec_append(&payload, &data, sizeof(data));
+	}
+
+	stream_send(client->net_stream, rcbuf_new(payload.data, payload.len),
+			NULL, NULL);
+
+	client->led_state = client->pending_led_state;
+
+	return true;
+}
+
+EXPORT
+void nvnc_client_set_led_state(struct nvnc_client* client,
+		enum nvnc_keyboard_led_state state)
+{
+	client->pending_led_state = state;
+	process_fb_update_requests(client);
+}
+
 EXPORT
 void nvnc_set_name(struct nvnc* self, const char* name)
 {
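A sketch of feeding LED changes into the API added above; the client-iteration helpers are assumed to exist in the public header, and only nvnc_client_set_led_state() comes from this patch.

#include <neatvnc.h> /* assumed to declare the client iteration helpers */

static void example_broadcast_led_state(struct nvnc* server,
		enum nvnc_keyboard_led_state state)
{
	/* Re-sending an unchanged state is cheap: client_send_led_state()
	 * compares against client->led_state and skips the update. */
	for (struct nvnc_client* c = nvnc_client_first(server); c;
			c = nvnc_client_next(c))
		nvnc_client_set_led_state(c, state);
}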

View File

@@ -7,6 +7,7 @@ pixels = executable('pixels',
 	dependencies: [
 		pixman,
 		libdrm_inc,
+		libm,
 	],
 )
 
 test('pixels', pixels)