merge #19
Conversation
… 5.0.87 Pre-requisite patch: Get rid of the Statement class. Move the old Statement members to THD and/or Prepared_statement. Move the old Statement backup/restore logic to sql_prepare.cc. Rewrite Statement_map so that only Prepared_statement objects are stored there. Remove Query_arena::type(), which is no longer needed. This makes it easier to reason about the copying of query strings between Prepared_statement and THD, which is the cause of the bug.
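A minimal sketch of the reshaped Statement_map described above, assuming a std::unordered_map container and simplified method names (the real server code uses its own HASH structure; everything here is illustrative, not the actual sql_class.h interface):

```cpp
#include <unordered_map>

class Prepared_statement;  // the only statement type stored after the rewrite

// Illustrative sketch: no intermediate Statement base class remains; the
// map goes straight from statement id to Prepared_statement.
class Statement_map {
 public:
  Prepared_statement *find(unsigned long id) {
    auto it = st_hash.find(id);
    return it == st_hash.end() ? nullptr : it->second;
  }
  void insert(unsigned long id, Prepared_statement *stmt) {
    st_hash.emplace(id, stmt);
  }
  void erase(unsigned long id) { st_hash.erase(id); }

 private:
  std::unordered_map<unsigned long, Prepared_statement *> st_hash;
};
```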
WL#7777 Integrate PFS memory instrumentation with InnoDB Account all memory allocations in InnoDB via PFS using the interface provided by "WL#3249 PERFORMANCE SCHEMA, Instrument memory usage". Approved by: Yasufumi, Annamalai (rb:5845)
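A hedged sketch of the WL#3249 accounting pattern referred to above (InnoDB actually routes allocations through its own allocator; the key name, category, and exact headers below are approximations for illustration):

```cpp
#include "my_sys.h"                  /* my_malloc, MYF */
#include "mysql/psi/mysql_memory.h"  /* PSI_memory_key, mysql_memory_register */

static PSI_memory_key key_memory_example_buf;  // hypothetical key

static PSI_memory_info all_example_memory[] = {
    {&key_memory_example_buf, "example_buf", 0}};

// Register the memory class once at startup so P_S can aggregate it.
static void register_example_memory() {
  mysql_memory_register("innodb", all_example_memory,
                        sizeof(all_example_memory) /
                            sizeof(all_example_memory[0]));
}

// Every allocation is tagged with the key, making it visible in the
// performance_schema memory summary tables.
static void *alloc_example(size_t n) {
  return my_malloc(key_memory_example_buf, n, MYF(MY_WME));
}
```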
replication P_S tables Added more fields to the replication P_S tables. The following variables, which were previously part of SHOW STATUS, have now been added: SHOW STATUS LIKE 'Slave_last_heartbeat'; SHOW STATUS LIKE 'Slave_received_heartbeats'; SHOW STATUS LIKE 'Slave_heartbeat_period'; SHOW STATUS LIKE 'Slave_retried_transactions';
Post-merge fix: reduce memory consumption by default.
…TRUE We should pass sync=false so that we use the asynchronous aio. Approved by Sunny over IM.
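A standalone model of what the sync flag decides (this is not InnoDB source; the real entry point is fil_io, whose signature has changed across versions, so the structure below is illustrative only):

```cpp
#include <cstdio>

// Standalone model: an I/O request either blocks the caller (sync == true)
// or is queued for the asynchronous aio handler threads (sync == false).
// The patch flips the argument to false.
struct IORequest {
  bool sync;
};

void submit_io(const IORequest &req) {
  if (req.sync) {
    std::puts("synchronous: caller waits for completion");
  } else {
    std::puts("asynchronous: queued on the aio array, completed later");
  }
}

int main() {
  submit_io(IORequest{/* sync = */ false});  // the behavior the fix selects
}
```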
1. Fix the test case: DEADLOCK is a full rollback, so the COMMIT following the UPDATE is superfluous. 2. Remove the blocking transaction from the trx_t::hit_list if it is rolled back during the record lock enqueue phase. 3. When trx_t::state == TRX_STATE_FORCED_ROLLBACK, return DB_FORCED_ABORT on COMMIT/ROLLBACK requests.
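A standalone sketch of point 3, with the enums reduced to the two values that matter (the real commit path in trx0trx.cc is far larger; this only illustrates the state check):

```cpp
// Not InnoDB source: minimal types to show the forced-rollback check.
enum trx_state_t { TRX_STATE_ACTIVE, TRX_STATE_FORCED_ROLLBACK };
enum dberr_t { DB_SUCCESS, DB_FORCED_ABORT };

struct trx_t {
  trx_state_t state;
};

dberr_t trx_commit_sketch(trx_t *trx) {
  if (trx->state == TRX_STATE_FORCED_ROLLBACK) {
    // The work was already undone by a higher-priority transaction's
    // forced rollback; tell the server so it can relay the abort.
    return DB_FORCED_ABORT;
  }
  // ... normal commit path elided ...
  return DB_SUCCESS;
}
```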
It's a bug introduced by WL#6711 in the unique_constraint implementation. When unique_constraint is used, the hash value is used to decide whether two tuples are equal. However, a hash function cannot guarantee uniqueness for different tuples, so when two hash values are the same we must also compare the contents. While comparing the contents, there is an error in the length comparison in the function cmp_field_value.
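A self-contained illustration of the hash-then-compare rule the fix enforces (std::string stands in for the tuple storage; cmp_field_value itself is not reproduced here):

```cpp
#include <cstring>
#include <functional>
#include <string>

// Two distinct tuples can hash to the same value, so a hash match alone
// must never be treated as equality; fall back to comparing the contents.
struct Tuple {
  std::string data;
};

bool tuples_equal(const Tuple &a, const Tuple &b) {
  std::hash<std::string> h;
  if (h(a.data) != h(b.data)) return false;  // cheap reject
  // Equal hashes do NOT guarantee equal tuples: compare lengths and bytes
  // (the length comparison is where the reported bug lived).
  return a.data.size() == b.data.size() &&
         std::memcmp(a.data.data(), b.data.data(), a.data.size()) == 0;
}
```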
Update a forgotten .result file (it needs --big-test to run)
…. It is incremented when the transaction is started. Before killing a blocking transaction in the trx_t::hit_list we check that it hasn't been reused to start a new transaction. We kill the transaction only if the version numbers match.
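A standalone sketch of the version guard, using illustrative names (the real fields live on trx_t in the InnoDB source):

```cpp
#include <cstdint>

// Each transaction object carries a version counter bumped at start, so a
// slot reused for a new transaction is no longer a valid kill target.
struct trx_t {
  uint64_t version;  // incremented when the transaction is started
};

struct hit_list_entry_t {
  trx_t *trx;
  uint64_t version;  // version observed when the entry was queued
};

// Kill only if the object still belongs to the transaction we queued;
// a mismatch means it was reused and must be left alone.
bool should_kill(const hit_list_entry_t &e) {
  return e.trx->version == e.version;
}
```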
Follow-up checkin. Increase the timeout so that the TCs pass on all machines (including slow ones).
… MYSQLBINLOG Background: Some combinations of options for mysqlbinlog are invalid. Those combinations give various error messages. Problem: Some of these error messages were ungrammatical, unnecessarily complex, or partially wrong. Fix: Corrected the messages.
The function has never made an InnoDB redo log checkpoint, which would involve flushing data pages from the buffer pool to the file system. It has only ever flushed the redo log buffer to the redo log files. The actual InnoDB function call has been changed a few times since the introduction of InnoDB to MySQL in 2000, but the semantics never changed. Approved by Vasil Dimov
After the refactoring of mysql_upgrade, some option descriptions were out of date, and the --help option can print default values as well as special options. Added sorting of options by long name, with --help first. Updated the option descriptions. Special options and default values are now printed in the help output.
After the refactoring of mysql_upgrade, some option descriptions were out of date, and the --help option can print default values as well as special options. Fixed a compilation issue.
Minor after-commit fix
If a SELECT query is cached and a session state later exists, running the same SELECT will not send an OK packet with the session state, because the result set is served from the cache. For now, disable deprecate_eof.test when the query cache is ON.
Follow-up patch: Fix a compile failure in unit tests when compiling without the performance schema. The fix is to stop including the PSI_mutex_key object in the unit tests. It is no longer needed since the unit test no longer needs to link in ha_resolve_by_name().
ESTIMATE ON 32 BIT PLATFORMS The innodb_stats_fetch test failed when running on 32 bit platforms due to an "off by one" cardinality number in the result of a query that read from the statistics table in information schema. This test failure is caused by WL#7339. The cardinality numbers retrieved from information schema are roughly calculated this way when the table is stored in InnoDB:

1. InnoDB has the number of rows and the cardinality for the index stored in the persistent statistics. In the failing case, InnoDB had 1000 as the number of rows and 3 as the cardinality.

2. InnoDB calculates the records-per-key value and stores it in the KEY object. This is calculated as 1000/3 and is thus 333.333333... when using the code from WL#7339 (before this worklog, the rec_per_key value was 166).

3. When filling data into information schema, we re-calculate the cardinality number using the records-per-key information (in sql_show.cc): double records= (show_table->file->stats.records / key->records_per_key(j)); table->field[9]->store((longlong) records, TRUE); In this case we first compute the cardinality as records= 1000 / 333.3333... = 3.0000... or 2.9999999 and then use the cast to get an integer value. On 64-bit platforms the result was 3.00000, which was cast to 3. On 32-bit platforms the result was 2.999999, which was cast to 2 before being inserted into the information schema table. (In the pre-WL#7339 version, the calculated cardinality number was 6 for this case.)

This issue is caused by the conversion to a floating-point number for the records-per-key estimate: when re-calculating the cardinality number in step 3 above, we can easily get a result that is just below the actual correct cardinality, and due to the cast the number is always truncated. The suggested fix is to round the calculated cardinality to the nearest integer before inserting it into the statistics table. This both avoids the issue of different results on different platforms and produces a more correct cardinality estimate. This change has caused a few other test result files to be re-recorded; the updated cardinality numbers are more correct than the previous ones.
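A self-contained reproduction of the arithmetic in step 3 and of the suggested rounding fix (plain C++ here; the exact 32-bit/64-bit divergence in the server came from differing floating-point precision, but the truncation hazard is the same):

```cpp
#include <cmath>
#include <cstdio>

int main() {
  // Step 2: records per key stored as a float, 1000/3 = 333.3333...
  float rec_per_key = 1000.0f / 3;

  // Step 3: re-derive the cardinality; the quotient can land just below 3.
  double records = 1000 / rec_per_key;

  long long truncated = (long long)records;   // old behavior: may yield 2
  long long rounded = std::llround(records);  // suggested fix: always 3

  std::printf("records=%.9f truncated=%lld rounded=%lld\n", records,
              truncated, rounded);
  return 0;
}
```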
------------------------------------------------------------
revno: 8734
committer: [email protected]
branch nick: mysql-trunk
timestamp: Fri 2014-08-29 10:15:45 +0800
message: Commit the missing test case result file for WL#6835.
------------------------------------------------------------
revno: 8732 [merge]
committer: Sunny Bains <[email protected]>
branch nick: trunk
timestamp: Fri 2014-08-29 10:24:18 +1000
message: WL#6835 - InnoDB: GCS Replication: Deterministic Deadlock Handling (High Prio Transactions in InnoDB) Introduce transaction priority. Transactions with a higher priority cannot be rolled back by transactions with a lower priority. A higher-priority transaction will jump the lock wait queue and grab the record lock instead of waiting. This code is not currently visible to users, but there are debug tests that exercise it. It will probably require some additional work once it is used by GCS. rb#6036 Approved by Jimmy Yang.
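A minimal sketch of the priority rule stated in the WL#6835 message (illustrative names only; the real checks live in the InnoDB lock system):

```cpp
#include <cstdint>

// Standalone model: 0 means an ordinary transaction; higher values win.
struct trx_t {
  uint64_t priority;
};

// May `blocker` be force-rolled-back in favor of `requester`? Per the
// message, only a strictly higher priority may roll back a lower one.
bool can_rollback(const trx_t &requester, const trx_t &blocker) {
  return requester.priority > blocker.priority;
}

// Should `requester` jump ahead of `waiter` in the lock wait queue and
// grab the record lock instead of waiting?
bool jumps_queue(const trx_t &requester, const trx_t &waiter) {
  return requester.priority > waiter.priority;
}
```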
…PACKAGES Fixed by adding --rpm to the mysql_install_db command. Also some corrections: enterprise -> commercial.
Merge of cset 8778 from trunk
Merge of cset 8786 from trunk
Merge of cset 8810 from trunk
…T OF SOLARIS PKG INSTALL Remove --insecure from postinstall-solaris; it was a temporary fix. This is a dummy empty commit, as the fix has already been applied.
Merge of cset 8803 from trunk
Merged cset 8875 from trunk
…ILEGE Merged cset 8876 from trunk
Hi, thank you for submitting this pull request. In order to consider your code we need you to sign the Oracle Contribution Agreement (OCA). Please review the details and follow the instructions at http://www.oracle.com/technetwork/community/oca-486395.html
Closing pull request as it appears to have been submitted in error (not intended to be a contribution).