
[pull] 8.0 from mysql:8.0 #2


Merged
merged 458 commits into from
Jul 18, 2023

Conversation

pull[bot]

@pull pull bot commented Jul 18, 2023

See Commits and Changes for more details.


Created by pull[bot]

Can you help keep this open source service alive? 💖 Please sponsor : )

weigon and others added 30 commits March 30, 2023 16:42
When a client connects through Router and max_connections
is reached at the server (and a SUPER connection is already
connected to the server), the client gets

  Lost connection to MySQL server during query

or similar.

Background
----------

The client received an error message in the 3.21 format
(without SQL-state) even though it expected an error message in the 4.1
format (with SQL-state). The client discarded the unexpected message and
dropped the connection.

Additionally, the error-message was sent twice.

Change
------

- don't forward the Error packet as-is, but call the 'on-error' handler
  to either recode the error message according to the client's
  capabilities, or retry
- added tests for max-connection errors

Change-Id: I08c278ac342be008d651fad42a8a9ac85082a40f
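The two wire formats differ only in whether a SQL-state follows the error code. A minimal sketch of recoding an error payload to the client's capabilities (the function name and layout here are illustrative, not Router's actual API; CLIENT_PROTOCOL_41 is the real capability flag):

```cpp
#include <cstdint>
#include <string>

// Clients that negotiated the 4.1 protocol expect '#' + 5-char SQL-state
// after the error code; pre-4.1 (3.21-format) clients expect none.
constexpr uint32_t CLIENT_PROTOCOL_41 = 1 << 9;

std::string encode_error(uint32_t client_caps, uint16_t code,
                         const std::string &sql_state,
                         const std::string &msg) {
  std::string out;
  out.push_back('\xff');                                 // Error packet marker
  out.push_back(static_cast<char>(code & 0xff));         // error code, LE
  out.push_back(static_cast<char>((code >> 8) & 0xff));
  if (client_caps & CLIENT_PROTOCOL_41) {
    out.push_back('#');                                  // 4.1 format
    out += sql_state;
  }                                                      // else 3.21 format
  out += msg;
  return out;
}
```

Forwarding a 3.21-format payload to a 4.1 client (the situation in the bug) makes the client misparse and drop the connection, hence the recoding step.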
Addendum patch fixing build break of SectionIterators-t
on Sparc, 7.5 & 7.6 only.

Studio 12.6 Sun 5.14 is used to build and link the
unit test binaries for SectionIterators-t. The linker
seems unable to remove references to methods defined in
included header files, but not really used by the source
files SectionIterators.cpp and NdbApiSignal.cpp.

Thus we had build breaks due to 'undefined symbols' from the
linker.

The patch includes the libraries ndbgeneral and ndbportlib when linking
SectionIterators-t.

Intended for 7.5 and 7.6 only, as 8.0 and later use gcc,
which is not affected by this problem.

Change-Id: I099ff525856cfa7ad9d3246bdc61c36af75aaaa8
Change-Id: I1e523aa2cb20aca10b2e4445786c09e1b48a265f
Change-Id: I00c0bf5bf0e607d5beeef5c923d9d1a2564e40b7
…s intended to

The test case inserts two new rows into the ndb_apply_status table and expects
these inserts to be written to the binlog.

As updates to the ndb_apply_status table are exempted from what is
counted as an update within an epoch, the test case also updates
the t1 table in order to create a non-empty epoch.

However, the t1 update is to the exact same value as the existing one,
thus the update is ignored and the epoch is still empty
 -> no ndb_apply_status updates are written to the binlog.

The patch changes the t1 update operation such that a real update takes
place.

Change-Id: I8a5cdfc5ddf7e4638f7959020b981704cf1db039
Change-Id: If7bc869fa9daff43d1ba1fa9e9f3131f5c246209
Change-Id: I002d961dea96df85f354b0dcad9eed38d9c218f7
uninitialized otherwise.

MySQLSession::set_ssl_options() sets the ssl-related session options on the
session object but also caches them. If set_ssl_options() was never called, the
cache contains uninitialized (partially random) values which may lead to errors
in the calling code.

This patch changes the implementation so that the SSL options are never cached.
If needed, they are queried from the underlying MYSQL object to reflect the
actual values.

Change-Id: I4b75111fbefab5488e2a2f1e24364ff549651f03
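The general pattern, reading the value back on demand from the object that owns it instead of mirroring it in a member cache, can be sketched generically (the `Connection` type below is a stand-in for the MYSQL handle, not the real client-library type):

```cpp
#include <string>

// Hypothetical stand-in for the underlying MYSQL handle that owns the
// real option values.
struct Connection {
  std::string ssl_ca;
};

class Session {
  Connection conn_;

 public:
  void set_ssl_ca(const std::string &ca) { conn_.ssl_ca = ca; }

  // No member cache: the getter always reads from the owning object, so
  // it can never observe an uninitialized copy when the setter was
  // skipped.
  const std::string &ssl_ca() const { return conn_.ssl_ca; }
};
```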
…sses from venv setup

Reservation of ports for secondary engine processes (i.e., server,
driver) is being done in the routine used to create and set up a
virtual environment. Since port allocation does not depend on the
environment, the patch moves it out to a separate routine and adds
a way to exercise the situation when the environment variable
MTR_SKIP_VIRTUAL_ENV_SETUP is set.

Change-Id: Ib96c5aabb2a36199b0f063b63882d035c9a22639
The locked_row_action member of TABLE::reginfo is unused. Remove it.

Change-Id: I46e4a496fc6c694fdfad2b24a9205435310f5dc7
If an error is returned while initializing the planner arrays
used in join order optimization, the optimizer does not reset the
JOIN_TAB structure in the TABLE object. This causes problems
for the next query that uses the same TABLE structure,
leading to a server exit.
The fix is to reset "join_tab" in TABLE::reset.

Change-Id: I2a18a78d14082b3776ecc91943c097c4e1bce0df
Added deprecation warning for keyring_file and
keyring_encrypted_file plugins.

Change-Id: Icbe56aa66e058e8f6425d5c33ebb499599061f49
…e' / 'read key' methods

Starting from Mysql-8.0, the 'xfrm' of a key is only used when we need to
generate a hash value from the key, not for a 'compare' any more.

However, there are still many methods used only for reading keys for compare,
or even general methods reading any kind of attribute, which take a
'bool xfrm' argument.

This opens the door to incorrect usage of these methods and should be avoided.

This patch removes that 'xfrm' argument from methods where it should
always be 'false'. It also cleans up the naming of other methods
used only to read keys for hash generation by adding a '*Hash*' to
their name.

Furthermore, the method Dbtup::readKeyAttributes() is introduced as
a replacement for calling ::readAttribute(). The latter method added
an AttributeHeader to the values being read, which is not needed
when used from Dbtup::tuxReadPk(). Thus the caller had to remove that
AttributeHeader, requiring an extra copy of the PK.

Change-Id: I4daf3ee959c4b6a9f6fbbaebf5cc5bb0ffd6bbfb
…gned

In preparation for changing the md5_hash() function, add a unit test which
verifies that the generated hash result matches a set of prerecorded hash values.

Change-Id: Id94ab5026c151deb374e2ca49efa0e1df03c3e0b
…gned

As described in the bug report, there seems to be no valid reason for
requiring the md5_hash input to be 8-byte aligned anymore.

It seems to just add complexity for the caller, and sometimes
an extra memcpy() may even be needed.

The patch replaces the Uint64 accesses in md5_hash with memcpy(),
which seems to perform identically to the original DWORD assignments
(and generates ~identical code as well).

The md5_hash() signature is also changed to take a char* instead of
an Uint64*. Inlined variants of md5_hash, taking an Uint32* as
argument, are also provided for convenience, avoiding type casts
where md5_hash is normally called with a key stored in an Uint32[].

Usage of md5_hash is adapted accordingly.

Allocation of temporary buffers for md5_hash as Uint64[] has been
changed to Uint32[], as the Uint64 alignment requirement was removed.

Change-Id: I6089a3af1436d1e73620fabf01b32082519d9b38
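The memcpy() technique the patch relies on can be sketched in a few lines; unlike a cast-and-dereference, it is well-defined for any input alignment, and compilers lower it to a single load on targets that allow unaligned access:

```cpp
#include <cstdint>
#include <cstring>

// Well-defined for any alignment of p, unlike
// *reinterpret_cast<const uint64_t *>(p), which is UB on misaligned
// input. On x86/ARM64 this compiles to one 64-bit load.
inline uint64_t load_u64(const char *p) {
  uint64_t v;
  std::memcpy(&v, p, sizeof(v));
  return v;
}
```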
Change-Id: I4e2cc602668109ebefa4549b7fc747027ac2ef40
Change-Id: I1a836ee94667d0fe0c87ac98b3b2db23cfc7cbf5
When 'USE schema' is sent as a query instead of using the dedicated
command, the server issues two session-trackers:

- schema-changed (which is expected)
- state-changed (which is not expected)

The state-changed tracker blocks connection-sharing.

Change
------

- don't block connection-sharing if 'USE' is sent via COM_QUERY
- added a test for USE via COM_QUERY

Change-Id: I69a4e21946b089f1246c7a7fb59869764ffd2927
…o sense

Change the return type of DDTableBuffer::get to vector<byte>.
The code only tracked the allocation of the couple of bytes used for
the string object, not the data itself. It was short-lived anyway,
and it made the code overcomplicated for no benefit.

Change-Id: I36c591793689bd82ef61836bb87186443c497b17
…CT FROM I_S.PROCESSLIST".

Starting from version 8.0.28 of MySQL Server, concurrent execution of
the COM_STATISTICS and COM_CHANGE_USER commands and the SHOW PROCESSLIST
statement sometimes led to deadlock. The same problem was observed if
COM_STATISTICS was replaced with the FLUSH STATUS statement and/or SHOW
PROCESSLIST was replaced with SELECT from the I_S.PROCESSLIST table.

The deadlock occurred because of a regression from the fix for bug#32320541
"RACE CONDITION ON SECURITY_CONTEXT::M_USER". After this patch,
COM_STATISTICS/FLUSH STATUS, COM_CHANGE_USER and SHOW PROCESSLIST/
SELECT ... FROM I_S.PROCESSLIST started to acquire the same locks in
different orders.

In particular:
  1) Code responsible for changing the user for a connection started to
     acquire the THD::LOCK_thd_security_ctx mutex and then acquired the
     LOCK_status mutex during the call to THD::cleanup_connection(),
     without releasing the former.
  2) Implementations of the COM_STATISTICS and FLUSH STATUS commands acquire
     the LOCK_status mutex and then, during iteration through all
     connections, LOCK_thd_remove mutexes, without releasing the former.
  3) Finally, the SHOW PROCESSLIST/I_S.PROCESSLIST implementation acquired
     LOCK_thd_remove mutexes and then the THD::LOCK_thd_security_ctx mutex
     while copying information about a particular connection, without
     releasing the former.

Naturally, THD::LOCK_thd_security_ctx -> LOCK_status -> LOCK_thd_remove ->
THD::LOCK_thd_security_ctx dependency loop occasionally resulted in
deadlock.

This patch solves the problem by reducing the scope during which the
THD::LOCK_thd_security_ctx lock is held in the COM_CHANGE_USER implementation.
We no longer call THD::cleanup_connection()/lock LOCK_status while
holding it, thus breaking the dependency loop.

Thanks for the contribution [email protected] - Bug#110494.

Change-Id: If6bbd8a94a0671abcbfbce3afdaa5cebd28ac4f9
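The scope-reduction pattern can be sketched as follows; the mutex and function names mirror the description above, not the actual server code, and the context-swap body is elided:

```cpp
#include <mutex>

std::mutex lock_thd_security_ctx;
std::mutex lock_status;

// Placeholder for THD::cleanup_connection(), which internally takes
// LOCK_status.
void cleanup_connection() { std::lock_guard<std::mutex> g(lock_status); }

// Before the fix, the whole body ran under lock_thd_security_ctx,
// creating the security_ctx -> status edge of the cycle. After the fix,
// the security-context mutex is released before any LOCK_status work.
void change_user() {
  {
    std::lock_guard<std::mutex> g(lock_thd_security_ctx);
    // ... swap in the new security context here ...
  }  // released before cleanup_connection() touches lock_status
  cleanup_connection();
}
```

With the edge removed, the remaining acquisitions no longer form a cycle, so the deadlock cannot occur.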
With an earlier change, the GreetingForwarder doesn't send an Error
message itself, but passes it up to the caller, which may or may not send
it to the client.

But the log message was still logged even if the error was not sent to the
client.

Change
------

- move the logging of the debug message to the place where the error-msg is
  actually sent.

Change-Id: Ic71ca4d4676008d1f1e56f002dda230faca2f396
Tor Didriksen and others added 28 commits June 1, 2023 15:17
Set LANG=C in the environment when executing readelf, to avoid any
problems with non-ascii output.

Patch is based on a contribution from Kento Takeuchi.

Change-Id: I2a7e4dead3208aa5bb65f7d86b766e76fbb7b9c5
(cherry picked from commit 37a5f2c7a195d021186e40eef9738646e87ead74)
Description:
- Added a new service to check for changed characters in the password
- Updated component_validate_password and implemented the new service
- Updated the server component to use the new service when the current
  password is provided while changing the password
- Added test cases

Change-Id: I2177e3861fccef965d7e00e6267516db20281e1b
…change

In a GR setup, if a source of transactions exists besides the applier
channel, then the following can happen:

 - There are several transactions being applied locally, already
   certified and so associated with a ticket, let's say ticket 2.
   These transactions have not yet committed.
   Note that these can be local transactions or come from an async
   channel, for example.

 - A view change happens that has ticket 3 and that has to wait on
   transactions from ticket 2.

 - The view change (VC1) enters the GR applier channel applier and
   gets stuck there waiting for the ticket change to 3.

 - Now there is another group change, and another view change (VC2)
   while the transactions from ticket 2 end their execution.

Issue: There is a window where the last transaction from ticket 2 has
already marked itself as executed but has not yet popped the ticket;
VC2 will pop the ticket instead but never notify any of the
participants.

VC1 stays waiting forever for the ticket to change, and the worker
can't be killed.

Solution:

Make the condition wait break in periods of 1 second so the loop is
responsive to changes in the loop condition.
We also register a stage so the loop is more responsive to kill signals.

Change-Id: I86eb6d1e470d9728c540f2fbcfb4ba9357eba103
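The timed-wait loop described above is a standard remedy for a missed notification; a minimal sketch (flag and function names are illustrative, not GR's actual code):

```cpp
#include <chrono>
#include <condition_variable>
#include <mutex>

std::mutex m;
std::condition_variable cv;
bool ticket_changed = false;
bool killed = false;

// Instead of one unbounded wait, wake up at least once per second so the
// loop re-checks its condition (and, in the server, kill flags and
// stages) even if the notification was missed.
void wait_for_ticket_change() {
  std::unique_lock<std::mutex> lk(m);
  while (!ticket_changed && !killed) {
    cv.wait_for(lk, std::chrono::seconds(1));
  }
}
```

The loop still exits promptly on a real notify; the one-second timeout only bounds how long a lost wakeup can stall it.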
               prepared statements

Description:
Running server commands (\u / use, \s) or calling
C API functions (mysql_refresh, mysql_stat, mysql_dump_debug_info,
mysql_ping, mysql_set_server_option, mysql_list_processes,
mysql_reset_connection) can lead to an error being indicated in
the audit log even when there is no error.

Fix:
A check that prevents logging an error while running / calling
valid server commands and C API functions.

Change-Id: If215d3d2b356d9bce640991c857e41eb42caa947
https://sourceforge.net/projects/libtirpc/files/libtirpc/1.3.3/libtirpc-1.3.3.tar.bz2

Unpack the tarball.

Remove before checkin: m4/lt~obsolete.m4
(m4/lt~obsolete.m4 contains the illegal character '~').

Also remove .gitignore, so all necessary files will be added automatically.

Change-Id: I44858e23c8604c0b4aad3a38175497b48bbae335
(cherry picked from commit 49471bc05c287024a69107eee0a06d6b425a552b)
Add an "external project" for building libtirpc.a

Change-Id: I4227cadcf22221f9b9bed7c48cd46bc00a46281e
(cherry picked from commit 1e5373a76413f8337e9f83490858ef8e2624fb6c)
Empty 'missing' file, to avoid bootstrapping the autoconf stuff.

Change-Id: I664d50b0d4f1feef08fa93d0ab4adb943cbcedbb
(cherry picked from commit 43aa36eec782e868d9e4d8543e803d259047b56e)
Issue description
-----------------
PURGE BINARY LOGS and ALTER TABLE SECONDARY_LOG cannot be executed concurrently.

Analysis
--------
PURGE BINARY LOGS calls is_instance_backup_locked() to check if any other
process is holding BACKUP_LOCK, and fails if it finds any. The lock is
not acquired for the duration of the PURGE BINARY LOGS statement, which is
incorrect. As a result, it is possible that some other thread acquires
BACKUP_LOCK after PURGE has returned from is_instance_backup_locked().

Proposed solution
-----------------
PURGE BINARY LOGS should acquire an IX BACKUP_LOCK. Since IX locks
are compatible with each other, it will not block concurrent DDLs or get
blocked by them. Also, IX locks are not compatible with X locks, so PURGE
acquiring IX BACKUP_LOCK would also prevent a concurrent BACKUP from executing.

Change-Id: Ib0ccc8bea3fd0c2115246eb7fa50280f4292c3c4
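The compatibility rule the fix relies on is small enough to state as code; this is a sketch of the rule only, not the server's lock manager:

```cpp
// IX is compatible with IX (PURGE and concurrent DDL can run together),
// but IX is not compatible with X (a running BACKUP blocks PURGE, and a
// running PURGE blocks BACKUP from taking its X lock).
enum class LockType { IX, X };

bool compatible(LockType a, LockType b) {
  return a == LockType::IX && b == LockType::IX;
}
```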
…b noise

We are getting automatic alerts from github due to issues with
Python components listed in extra/libcbor/doc/source/requirements.txt.

Just remove the entire doc directory as it's never used anyway.

Change-Id: I58fe1ece430800a87d07fffdbe6253c4561cc281
(cherry picked from commit 5b305fdcdc06f5413871c71ea544a23a6b5dd9c0)
Post-push fix: remove the INSTALL symlink; doxygen complained:
..../INSTALL is not a readable file or directory... skipping.

Change-Id: I57540dac793c11e93b77846deadfc01a95895a51
(cherry picked from commit e5d56dfa8dbe03c04348b9d5c719c9e7dff4c0d2)
           and Item_func_in::populate_bisection

As part of a performance improvement, m_const_array is introduced to
save IN list items across executions. This list is populated either at
the end of preparation or at the start of execution, based on the
nature of its objects. Some of its elements (String class objects)
have memory allocated from runtime memory (example: input constants)
that is not valid after the end of the current execution. Ownership
of such memory is not passed to the String object at assignment,
but reuse is attempted in subsequent runs, which can lead to unexpected
behavior.

Fix: Release all runtime memory and clean up pointers in m_const_array
at the end of statement execution.

Change-Id: Ib56b047c54b1b0d9428e9ed515e097ab4e0d5e4d
Change-Id: I2f2ff32a8fc6c3fd8caad5555c923d38e13740bb
Approved-by: Erlend Dahl <[email protected]>
Temporary errors can occur while opening a table from the NDB
dictionary. This kind of error is rare but can be provoked by repeatedly
performing schema operations.
Since failing to open the NDB table definition most often causes fatal
errors down the line, it's most likely better to take some time and
attempt to resolve the temporary error by retrying to open the table.

These kinds of temporary errors are primarily important to get rid of
when distributing schema operations throughout the cluster, where many
MySQL Servers simultaneously attempt to open the table from NDB.

Fix by handling temporary errors while opening the table from NDB with
retries. Some delay is acceptable during these operations in order
for temporary resource shortages to subside while opening tables.

Change-Id: Ie26426d7f4892828d1e759d131fdbda810a5c846
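A retry loop of this shape can be sketched as follows; the dictionary call is a stub here (the real function and its error classification live in the NDB handler code), and the attempt count and sleep interval are illustrative:

```cpp
#include <chrono>
#include <thread>

enum class OpenResult { Ok, TemporaryError, PermanentError };

// Stub standing in for the dictionary call: fails twice with a temporary
// error, then succeeds.
static int remaining_failures = 2;
OpenResult open_table_from_ndb() {
  return remaining_failures-- > 0 ? OpenResult::TemporaryError
                                  : OpenResult::Ok;
}

// Retry only on temporary errors, sleeping briefly between attempts so
// transient resource shortages have a chance to subside. Permanent
// errors are returned immediately.
OpenResult open_table_with_retry(int max_attempts = 10) {
  for (int attempt = 1;; ++attempt) {
    OpenResult r = open_table_from_ndb();
    if (r != OpenResult::TemporaryError || attempt == max_attempts) return r;
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
  }
}
```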
coverage

Add a new test (ndb_rpl_skip_ddl) which exercises the 'skip DDL errors'
part of the functionality in the Ndb context, as this functionality was
added for Ndb.

Change-Id: If57337634fa574d5420990e98673c21b6e187924
Multi-threaded-applier work for Ndb uncovered "bug#34229520 : Ndb MTA
unordered commits with log", which was fixed in 8.0.33 by commit
2510e0d6a972140476a17e0283572d36eaefb00a.

No testcase was added as the problem only showed up with a storage
engine which handles Binlogging itself (e.g. Ndb).

This patch adds the testcase.

Change-Id: Ib833e37ac5003a3fd4ca4c0cf8a6b6fafc41c1ba
The LAST_ERROR_MESSAGE returned from
performance_schema.replication_applier_status_by_worker when the applier has
stopped varies across platforms. The error message contains what seem
to be platform-specific messages on Windows and Mac.

Fix by not showing the LAST_ERROR_MESSAGE column.

Change-Id: I1b31ad0910ec8d9c13bd4cc48ba64027eb19b5d4
Separate patch to adapt a new (since 8.0.33) test by changing its config
to disallow transaction retries and then expect ER_LOCK_WAIT_TIMEOUT
rather than ER_ERROR_DURING_COMMIT.

Change-Id: Ib0945001855d4f41c956fbc433a8e9660ad9c0be
Issue deprecation warning to stderr on mysql_options() and
mysql_get_option().
Re-recorded the MEB tests
Re-recorded some more MEB tests
Re-recorded some even more MEB tests
Re-recorded some further MEB tests
Re-recorded windows only tests

Change-Id: I5e7cb556779797ca7585daf000608046c3010608
…iphers

Update unacceptable cipher list: 3DES and LOW permanently removed.
Add tls_server_context unit test to harness to verify mandatory,
acceptable, deprecated and unacceptable cipher handling.

Change-Id: I706990e3e6b7420f21f2a5c83137a21492394049
Unpack curl-8.1.2.tar.gz

rm configure ltmain.sh config.guess config.sub Makefile
rm -rf docs m4 packages plan9 projects src tests winbuild

git add curl-8.1.2

Change-Id: Idcbab1853b4fe98d44b0ae6ca41bd154d9b7bf31
(cherry picked from commit ab23e5390964c7bb68c5ab5783f4588d7a5eeacc)
On Oracle Linux 7, we now support -DWITH_SSL=openssl11.
This option will automatically set WITH_CURL=bundled.

"bundled" curl is not supported for other platforms.

Update CURL_VERSION_DIR

Disable in curl cmake files:
 - cmake_minimum_required(VERSION)
 - BUILD_SHARED_LIBS
 - CMAKE_DEBUG_POSTFIX
 - find_package(OpenSSL)
 - install(...)
 - set PICKY_COMPILER OFF by default

Enable in curl cmake files:
 - Keep the FILE protocol for HTTP_ONLY
 - Add dependency on bundled zlib

Change-Id: I089eadaf81e6a02a6f021f5716d2aa61bad78c06
(cherry picked from commit d247b669630a5fd7200a241192f1217d44729aeb)
Remove all old source files.

Change-Id: I6198dc3e59d50a6a68b42e7f7b5041edc6e81637
(cherry picked from commit ccb4fdc00297b8e0d100667bc72d6e192660b8f9)
The server now exits where we used to see an error message.  The
refactoring performed in commit:

  WL#14672: Enable the hypergraph optimizer for UPDATE [7/8, single-table]

has (presumably inadvertently) inverted the logic when we check for
the presence of ORDER BY and LIMIT with multi-table updates. It seems
clear that we should give an error message if ORDER BY or LIMIT is
combined with a de facto multi-table update, whether it syntactically
looks like one or is hidden inside a view definition.

The solution is to invert the test back, so that these checks are
performed.

The fix solves Bug#35438510.

Change-Id: I662775324cc9738e283d298989572077ac8a696d
Approved-by: Erlend Dahl <[email protected]>
              CTE query

When transforming a subquery to a derived table, if a new field is added,
it should increment the reference count. However, this was not done when
a view reference was replaced with a new field. This caused the referenced
field to be deleted when it was wrongly concluded that the field was
unused. The problem arises because the same underlying field is referenced
by all the view references created for that field.

For a query like this one:
WITH  cte AS (SELECT
              (SELECT COUNT(f1) FROM t1),
               COUNT(dt.f2),
               dt.f3
              FROM (SELECT * FROM t1) AS dt)
SELECT * FROM cte;

it gets transformed into

select `derived_2_6`.`COUNT(f1)` AS `(SELECT COUNT(f1) FROM t1)`,
       `derived_2_5`.`COUNT(dt.f2)` AS `COUNT(dt.f2)`,
       `derived_2_5`.`Name_exp_1` AS `f3`
from (select count(`test`.`t1`.`f2`) AS `COUNT(dt.f2)`,
            `test`.`t1`.`f3` AS `Name_exp_1`
      from `test`.`t1`) `derived_2_5`
left join (select count(`test`.`t1`.`f1`) AS `COUNT(f1)`
           from `test`.`t1`) `derived_2_6` on(true) where true;

The expression "Name_exp_1" is a view reference because the derived
table "dt" gets merged. When this is replaced with an "Item_field"
during the subquery-to-derived transformation, we do not increment the
reference count. Later, we see that the derived table "cte", which
is resolved after the creation of the derived tables "derived_2_5"
and "derived_2_6", gets merged. While deleting the unused fields for
this derived table, we delete the field "f3" even though it is still
used in the query.

The fix is to correctly increment the ref count for the new field created.

Change-Id: I4035d86990e3cfb099144b7f304b864eb0451f5d
@pull pull bot added the ⤵️ pull label Jul 18, 2023
@pull pull bot merged commit 057f5c9 into zhuizhu-95:8.0 Jul 18, 2023